Test Report: KVM_Linux_crio 18943

a95fbdf9550db8c431fa5a4c330192118acd2cbf:2024-09-01:36027

Test fail (13/270)

TestAddons/parallel/Registry (74.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.243238ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004636306s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004410556s
addons_test.go:342: (dbg) Run:  kubectl --context addons-132210 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-132210 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-132210 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.098363276s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-132210 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 ip
2024/08/31 22:19:13 [DEBUG] GET http://192.168.39.12:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable registry --alsologtostderr -v=1
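To reproduce the failing check by hand (a minimal sketch, assuming the addons-132210 profile is still running and the registry addon is still enabled), the same in-cluster probe the test issues can be run directly:

	kubectl --context addons-132210 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A healthy registry answers with "HTTP/1.1 200"; here the probe timed out after roughly one minute, which suggests the registry service (or its endpoints) was not reachable from inside the cluster even though both the registry and registry-proxy pods reported healthy.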
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-132210 -n addons-132210
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 logs -n 25: (2.104416947s)
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-160287                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-160287                                                                     | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | -o=json --download-only                                                                     | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | -p download-only-777221                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-777221                                                                     | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-160287                                                                     | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-777221                                                                     | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-465268 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | binary-mirror-465268                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45273                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-465268                                                                     | binary-mirror-465268 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-132210 --wait=true                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-132210 ssh cat                                                                       | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | /opt/local-path-provisioner/pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | -p addons-132210                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-132210 ip                                                                            | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:37.544876   21098 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:37.545155   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:37.545165   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:37.545172   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:37.545383   21098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:06:37.545946   21098 out.go:352] Setting JSON to false
	I0831 22:06:37.546798   21098 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2945,"bootTime":1725139053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:37.546859   21098 start.go:139] virtualization: kvm guest
	I0831 22:06:37.548701   21098 out.go:177] * [addons-132210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:06:37.550111   21098 notify.go:220] Checking for updates...
	I0831 22:06:37.550129   21098 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:06:37.551500   21098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:37.552938   21098 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:06:37.554280   21098 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:37.555749   21098 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:06:37.557091   21098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:06:37.558401   21098 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:37.589360   21098 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 22:06:37.590841   21098 start.go:297] selected driver: kvm2
	I0831 22:06:37.590856   21098 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:06:37.590868   21098 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:06:37.591824   21098 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:37.591929   21098 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:06:37.606642   21098 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:06:37.606704   21098 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:37.606922   21098 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:06:37.606953   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:06:37.606960   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:06:37.606967   21098 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:37.607020   21098 start.go:340] cluster config:
	{Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:37.607103   21098 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:37.608999   21098 out.go:177] * Starting "addons-132210" primary control-plane node in "addons-132210" cluster
	I0831 22:06:37.610406   21098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:06:37.610441   21098 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:37.610451   21098 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:37.610537   21098 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:06:37.610551   21098 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:06:37.610893   21098 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json ...
	I0831 22:06:37.610917   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json: {Name:mk700584d59ad42df80709b4fc4c500ed7306a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:37.611077   21098 start.go:360] acquireMachinesLock for addons-132210: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:06:37.611133   21098 start.go:364] duration metric: took 40.383µs to acquireMachinesLock for "addons-132210"
	I0831 22:06:37.611156   21098 start.go:93] Provisioning new machine with config: &{Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:06:37.611223   21098 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 22:06:37.613166   21098 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0831 22:06:37.613301   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:06:37.613345   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:06:37.627241   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0831 22:06:37.627637   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:06:37.628132   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:06:37.628166   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:06:37.628421   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:06:37.628636   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:06:37.628770   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:06:37.628882   21098 start.go:159] libmachine.API.Create for "addons-132210" (driver="kvm2")
	I0831 22:06:37.628903   21098 client.go:168] LocalClient.Create starting
	I0831 22:06:37.628944   21098 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:06:37.824136   21098 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:06:38.014796   21098 main.go:141] libmachine: Running pre-create checks...
	I0831 22:06:38.014823   21098 main.go:141] libmachine: (addons-132210) Calling .PreCreateCheck
	I0831 22:06:38.015353   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:06:38.015789   21098 main.go:141] libmachine: Creating machine...
	I0831 22:06:38.015803   21098 main.go:141] libmachine: (addons-132210) Calling .Create
	I0831 22:06:38.015942   21098 main.go:141] libmachine: (addons-132210) Creating KVM machine...
	I0831 22:06:38.017102   21098 main.go:141] libmachine: (addons-132210) DBG | found existing default KVM network
	I0831 22:06:38.017881   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.017718   21120 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0831 22:06:38.017904   21098 main.go:141] libmachine: (addons-132210) DBG | created network xml: 
	I0831 22:06:38.017916   21098 main.go:141] libmachine: (addons-132210) DBG | <network>
	I0831 22:06:38.017928   21098 main.go:141] libmachine: (addons-132210) DBG |   <name>mk-addons-132210</name>
	I0831 22:06:38.017940   21098 main.go:141] libmachine: (addons-132210) DBG |   <dns enable='no'/>
	I0831 22:06:38.017950   21098 main.go:141] libmachine: (addons-132210) DBG |   
	I0831 22:06:38.017970   21098 main.go:141] libmachine: (addons-132210) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0831 22:06:38.017978   21098 main.go:141] libmachine: (addons-132210) DBG |     <dhcp>
	I0831 22:06:38.017991   21098 main.go:141] libmachine: (addons-132210) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0831 22:06:38.018001   21098 main.go:141] libmachine: (addons-132210) DBG |     </dhcp>
	I0831 22:06:38.018013   21098 main.go:141] libmachine: (addons-132210) DBG |   </ip>
	I0831 22:06:38.018023   21098 main.go:141] libmachine: (addons-132210) DBG |   
	I0831 22:06:38.018033   21098 main.go:141] libmachine: (addons-132210) DBG | </network>
	I0831 22:06:38.018046   21098 main.go:141] libmachine: (addons-132210) DBG | 
	I0831 22:06:38.023383   21098 main.go:141] libmachine: (addons-132210) DBG | trying to create private KVM network mk-addons-132210 192.168.39.0/24...
	I0831 22:06:38.089434   21098 main.go:141] libmachine: (addons-132210) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 ...
	I0831 22:06:38.089471   21098 main.go:141] libmachine: (addons-132210) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:06:38.089479   21098 main.go:141] libmachine: (addons-132210) DBG | private KVM network mk-addons-132210 192.168.39.0/24 created
	I0831 22:06:38.089493   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.089368   21120 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:38.089534   21098 main.go:141] libmachine: (addons-132210) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:06:38.337644   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.337536   21120 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa...
	I0831 22:06:38.706397   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.706261   21120 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/addons-132210.rawdisk...
	I0831 22:06:38.706425   21098 main.go:141] libmachine: (addons-132210) DBG | Writing magic tar header
	I0831 22:06:38.706435   21098 main.go:141] libmachine: (addons-132210) DBG | Writing SSH key tar header
	I0831 22:06:38.706447   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.706368   21120 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 ...
	I0831 22:06:38.706460   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210
	I0831 22:06:38.706528   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 (perms=drwx------)
	I0831 22:06:38.706557   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:06:38.706570   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:06:38.706579   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:38.706596   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:06:38.706607   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:06:38.706621   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:06:38.706633   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:06:38.706649   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:06:38.706662   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:06:38.706672   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:06:38.706683   21098 main.go:141] libmachine: (addons-132210) Creating domain...
	I0831 22:06:38.706692   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home
	I0831 22:06:38.706704   21098 main.go:141] libmachine: (addons-132210) DBG | Skipping /home - not owner
	I0831 22:06:38.707726   21098 main.go:141] libmachine: (addons-132210) define libvirt domain using xml: 
	I0831 22:06:38.707749   21098 main.go:141] libmachine: (addons-132210) <domain type='kvm'>
	I0831 22:06:38.707757   21098 main.go:141] libmachine: (addons-132210)   <name>addons-132210</name>
	I0831 22:06:38.707766   21098 main.go:141] libmachine: (addons-132210)   <memory unit='MiB'>4000</memory>
	I0831 22:06:38.707792   21098 main.go:141] libmachine: (addons-132210)   <vcpu>2</vcpu>
	I0831 22:06:38.707816   21098 main.go:141] libmachine: (addons-132210)   <features>
	I0831 22:06:38.707830   21098 main.go:141] libmachine: (addons-132210)     <acpi/>
	I0831 22:06:38.707843   21098 main.go:141] libmachine: (addons-132210)     <apic/>
	I0831 22:06:38.707865   21098 main.go:141] libmachine: (addons-132210)     <pae/>
	I0831 22:06:38.707885   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.707895   21098 main.go:141] libmachine: (addons-132210)   </features>
	I0831 22:06:38.707905   21098 main.go:141] libmachine: (addons-132210)   <cpu mode='host-passthrough'>
	I0831 22:06:38.707915   21098 main.go:141] libmachine: (addons-132210)   
	I0831 22:06:38.707924   21098 main.go:141] libmachine: (addons-132210)   </cpu>
	I0831 22:06:38.707929   21098 main.go:141] libmachine: (addons-132210)   <os>
	I0831 22:06:38.707936   21098 main.go:141] libmachine: (addons-132210)     <type>hvm</type>
	I0831 22:06:38.707942   21098 main.go:141] libmachine: (addons-132210)     <boot dev='cdrom'/>
	I0831 22:06:38.707948   21098 main.go:141] libmachine: (addons-132210)     <boot dev='hd'/>
	I0831 22:06:38.707954   21098 main.go:141] libmachine: (addons-132210)     <bootmenu enable='no'/>
	I0831 22:06:38.707960   21098 main.go:141] libmachine: (addons-132210)   </os>
	I0831 22:06:38.707966   21098 main.go:141] libmachine: (addons-132210)   <devices>
	I0831 22:06:38.707975   21098 main.go:141] libmachine: (addons-132210)     <disk type='file' device='cdrom'>
	I0831 22:06:38.708007   21098 main.go:141] libmachine: (addons-132210)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/boot2docker.iso'/>
	I0831 22:06:38.708027   21098 main.go:141] libmachine: (addons-132210)       <target dev='hdc' bus='scsi'/>
	I0831 22:06:38.708034   21098 main.go:141] libmachine: (addons-132210)       <readonly/>
	I0831 22:06:38.708039   21098 main.go:141] libmachine: (addons-132210)     </disk>
	I0831 22:06:38.708051   21098 main.go:141] libmachine: (addons-132210)     <disk type='file' device='disk'>
	I0831 22:06:38.708065   21098 main.go:141] libmachine: (addons-132210)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:06:38.708082   21098 main.go:141] libmachine: (addons-132210)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/addons-132210.rawdisk'/>
	I0831 22:06:38.708092   21098 main.go:141] libmachine: (addons-132210)       <target dev='hda' bus='virtio'/>
	I0831 22:06:38.708106   21098 main.go:141] libmachine: (addons-132210)     </disk>
	I0831 22:06:38.708123   21098 main.go:141] libmachine: (addons-132210)     <interface type='network'>
	I0831 22:06:38.708137   21098 main.go:141] libmachine: (addons-132210)       <source network='mk-addons-132210'/>
	I0831 22:06:38.708149   21098 main.go:141] libmachine: (addons-132210)       <model type='virtio'/>
	I0831 22:06:38.708162   21098 main.go:141] libmachine: (addons-132210)     </interface>
	I0831 22:06:38.708173   21098 main.go:141] libmachine: (addons-132210)     <interface type='network'>
	I0831 22:06:38.708181   21098 main.go:141] libmachine: (addons-132210)       <source network='default'/>
	I0831 22:06:38.708190   21098 main.go:141] libmachine: (addons-132210)       <model type='virtio'/>
	I0831 22:06:38.708213   21098 main.go:141] libmachine: (addons-132210)     </interface>
	I0831 22:06:38.708228   21098 main.go:141] libmachine: (addons-132210)     <serial type='pty'>
	I0831 22:06:38.708239   21098 main.go:141] libmachine: (addons-132210)       <target port='0'/>
	I0831 22:06:38.708252   21098 main.go:141] libmachine: (addons-132210)     </serial>
	I0831 22:06:38.708262   21098 main.go:141] libmachine: (addons-132210)     <console type='pty'>
	I0831 22:06:38.708276   21098 main.go:141] libmachine: (addons-132210)       <target type='serial' port='0'/>
	I0831 22:06:38.708292   21098 main.go:141] libmachine: (addons-132210)     </console>
	I0831 22:06:38.708304   21098 main.go:141] libmachine: (addons-132210)     <rng model='virtio'>
	I0831 22:06:38.708316   21098 main.go:141] libmachine: (addons-132210)       <backend model='random'>/dev/random</backend>
	I0831 22:06:38.708328   21098 main.go:141] libmachine: (addons-132210)     </rng>
	I0831 22:06:38.708338   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.708349   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.708362   21098 main.go:141] libmachine: (addons-132210)   </devices>
	I0831 22:06:38.708377   21098 main.go:141] libmachine: (addons-132210) </domain>
	I0831 22:06:38.708386   21098 main.go:141] libmachine: (addons-132210) 
	I0831 22:06:38.714749   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:04:9d:ea in network default
	I0831 22:06:38.715229   21098 main.go:141] libmachine: (addons-132210) Ensuring networks are active...
	I0831 22:06:38.715251   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:38.715857   21098 main.go:141] libmachine: (addons-132210) Ensuring network default is active
	I0831 22:06:38.716174   21098 main.go:141] libmachine: (addons-132210) Ensuring network mk-addons-132210 is active
	I0831 22:06:38.716662   21098 main.go:141] libmachine: (addons-132210) Getting domain xml...
	I0831 22:06:38.717336   21098 main.go:141] libmachine: (addons-132210) Creating domain...
	I0831 22:06:40.114794   21098 main.go:141] libmachine: (addons-132210) Waiting to get IP...
	I0831 22:06:40.115527   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.115799   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.115829   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.115776   21120 retry.go:31] will retry after 204.646064ms: waiting for machine to come up
	I0831 22:06:40.322141   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.322530   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.322561   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.322474   21120 retry.go:31] will retry after 367.388706ms: waiting for machine to come up
	I0831 22:06:40.691020   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.691359   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.691385   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.691306   21120 retry.go:31] will retry after 449.926201ms: waiting for machine to come up
	I0831 22:06:41.142806   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:41.143371   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:41.143398   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:41.143199   21120 retry.go:31] will retry after 411.198107ms: waiting for machine to come up
	I0831 22:06:41.555507   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:41.556022   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:41.556044   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:41.555945   21120 retry.go:31] will retry after 684.989531ms: waiting for machine to come up
	I0831 22:06:42.242958   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:42.243440   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:42.243461   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:42.243416   21120 retry.go:31] will retry after 922.263131ms: waiting for machine to come up
	I0831 22:06:43.167145   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:43.167604   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:43.167629   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:43.167554   21120 retry.go:31] will retry after 879.584878ms: waiting for machine to come up
	I0831 22:06:44.048638   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:44.048976   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:44.048997   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:44.048933   21120 retry.go:31] will retry after 1.427746455s: waiting for machine to come up
	I0831 22:06:45.478039   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:45.478640   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:45.478666   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:45.478603   21120 retry.go:31] will retry after 1.190362049s: waiting for machine to come up
	I0831 22:06:46.671043   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:46.671501   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:46.671530   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:46.671448   21120 retry.go:31] will retry after 2.196766808s: waiting for machine to come up
	I0831 22:06:48.869585   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:48.870037   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:48.870059   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:48.869999   21120 retry.go:31] will retry after 2.216870251s: waiting for machine to come up
	I0831 22:06:51.089344   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:51.089783   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:51.089804   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:51.089726   21120 retry.go:31] will retry after 3.489292564s: waiting for machine to come up
	I0831 22:06:54.581936   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:54.582398   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:54.582426   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:54.582313   21120 retry.go:31] will retry after 2.860598857s: waiting for machine to come up
	I0831 22:06:57.446192   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:57.446589   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:57.446614   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:57.446501   21120 retry.go:31] will retry after 4.269318205s: waiting for machine to come up
	I0831 22:07:01.720788   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.721275   21098 main.go:141] libmachine: (addons-132210) Found IP for machine: 192.168.39.12
	I0831 22:07:01.721302   21098 main.go:141] libmachine: (addons-132210) Reserving static IP address...
	I0831 22:07:01.721320   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has current primary IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.721673   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find host DHCP lease matching {name: "addons-132210", mac: "52:54:00:35:a4:57", ip: "192.168.39.12"} in network mk-addons-132210
	I0831 22:07:01.793692   21098 main.go:141] libmachine: (addons-132210) DBG | Getting to WaitForSSH function...
	I0831 22:07:01.793719   21098 main.go:141] libmachine: (addons-132210) Reserved static IP address: 192.168.39.12
	I0831 22:07:01.793733   21098 main.go:141] libmachine: (addons-132210) Waiting for SSH to be available...
	I0831 22:07:01.796008   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.796380   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:01.796413   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.796552   21098 main.go:141] libmachine: (addons-132210) DBG | Using SSH client type: external
	I0831 22:07:01.796581   21098 main.go:141] libmachine: (addons-132210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa (-rw-------)
	I0831 22:07:01.796618   21098 main.go:141] libmachine: (addons-132210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:07:01.796631   21098 main.go:141] libmachine: (addons-132210) DBG | About to run SSH command:
	I0831 22:07:01.796665   21098 main.go:141] libmachine: (addons-132210) DBG | exit 0
	I0831 22:07:01.927398   21098 main.go:141] libmachine: (addons-132210) DBG | SSH cmd err, output: <nil>: 
	I0831 22:07:01.927709   21098 main.go:141] libmachine: (addons-132210) KVM machine creation complete!
	I0831 22:07:01.928053   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:07:01.928588   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:01.928805   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:01.928982   21098 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:07:01.928996   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:01.930232   21098 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:07:01.930250   21098 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:07:01.930278   21098 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:07:01.930291   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:01.932160   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.932434   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:01.932466   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.932569   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:01.932748   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:01.932899   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:01.933022   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:01.933173   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:01.933359   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:01.933371   21098 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:07:02.030631   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:07:02.030654   21098 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:07:02.030661   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.033292   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.033728   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.033761   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.033978   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.034178   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.034350   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.034509   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.034664   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.034840   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.034854   21098 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:07:02.136244   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:07:02.136350   21098 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:07:02.136362   21098 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:07:02.136370   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.136633   21098 buildroot.go:166] provisioning hostname "addons-132210"
	I0831 22:07:02.136653   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.136838   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.139916   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.140414   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.140447   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.140679   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.140892   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.141063   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.141293   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.141484   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.141657   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.141672   21098 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-132210 && echo "addons-132210" | sudo tee /etc/hostname
	I0831 22:07:02.253631   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-132210
	
	I0831 22:07:02.253688   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.256261   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.256636   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.256662   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.256793   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.256965   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.257118   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.257266   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.257410   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.257558   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.257579   21098 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-132210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-132210/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-132210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:07:02.369069   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:07:02.369101   21098 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:07:02.369138   21098 buildroot.go:174] setting up certificates
	I0831 22:07:02.369148   21098 provision.go:84] configureAuth start
	I0831 22:07:02.369159   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.369509   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:02.372462   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.372743   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.372769   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.372894   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.375363   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.375809   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.375831   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.376027   21098 provision.go:143] copyHostCerts
	I0831 22:07:02.376110   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:07:02.376256   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:07:02.376417   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:07:02.376622   21098 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.addons-132210 san=[127.0.0.1 192.168.39.12 addons-132210 localhost minikube]
	I0831 22:07:02.529409   21098 provision.go:177] copyRemoteCerts
	I0831 22:07:02.529465   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:07:02.529485   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.531858   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.532087   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.532145   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.532288   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.532439   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.532600   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.532744   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:02.614769   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:07:02.640733   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:07:02.666643   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:07:02.692178   21098 provision.go:87] duration metric: took 323.018181ms to configureAuth
	I0831 22:07:02.692206   21098 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:07:02.692406   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:02.692494   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.695406   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.695687   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.695718   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.695909   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.696178   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.696371   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.696472   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.696596   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.696771   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.696792   21098 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:07:02.919512   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:07:02.919537   21098 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:07:02.919546   21098 main.go:141] libmachine: (addons-132210) Calling .GetURL
	I0831 22:07:02.920835   21098 main.go:141] libmachine: (addons-132210) DBG | Using libvirt version 6000000
	I0831 22:07:02.923016   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.923361   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.923391   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.923525   21098 main.go:141] libmachine: Docker is up and running!
	I0831 22:07:02.923543   21098 main.go:141] libmachine: Reticulating splines...
	I0831 22:07:02.923552   21098 client.go:171] duration metric: took 25.29463901s to LocalClient.Create
	I0831 22:07:02.923574   21098 start.go:167] duration metric: took 25.294693611s to libmachine.API.Create "addons-132210"
	I0831 22:07:02.923584   21098 start.go:293] postStartSetup for "addons-132210" (driver="kvm2")
	I0831 22:07:02.923593   21098 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:07:02.923609   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:02.923852   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:07:02.923871   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.925703   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.926011   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.926030   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.926155   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.926317   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.926442   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.926556   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.006717   21098 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:07:03.011232   21098 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:07:03.011262   21098 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:07:03.011362   21098 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:07:03.011394   21098 start.go:296] duration metric: took 87.804145ms for postStartSetup
	I0831 22:07:03.011427   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:07:03.012028   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:03.014629   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.014960   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.014988   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.015270   21098 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json ...
	I0831 22:07:03.015499   21098 start.go:128] duration metric: took 25.404265309s to createHost
	I0831 22:07:03.015523   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.017928   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.018268   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.018291   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.018500   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.018686   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.018822   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.018966   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.019111   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:03.019276   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:03.019286   21098 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:07:03.120128   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725142023.097010301
	
	I0831 22:07:03.120147   21098 fix.go:216] guest clock: 1725142023.097010301
	I0831 22:07:03.120190   21098 fix.go:229] Guest: 2024-08-31 22:07:03.097010301 +0000 UTC Remote: 2024-08-31 22:07:03.015511488 +0000 UTC m=+25.502821103 (delta=81.498813ms)
	I0831 22:07:03.120212   21098 fix.go:200] guest clock delta is within tolerance: 81.498813ms
	I0831 22:07:03.120217   21098 start.go:83] releasing machines lock for "addons-132210", held for 25.509073174s
	I0831 22:07:03.120236   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.120504   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:03.123087   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.123415   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.123439   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.123594   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124139   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124328   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124419   21098 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:07:03.124455   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.124550   21098 ssh_runner.go:195] Run: cat /version.json
	I0831 22:07:03.124566   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.127123   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127348   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127456   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.127478   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127620   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.127797   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.127815   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127860   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.127949   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.128037   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.128111   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.128172   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.128232   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.128351   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.200298   21098 ssh_runner.go:195] Run: systemctl --version
	I0831 22:07:03.227274   21098 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:07:03.385642   21098 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:07:03.391833   21098 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:07:03.391895   21098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:07:03.410079   21098 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:07:03.410103   21098 start.go:495] detecting cgroup driver to use...
	I0831 22:07:03.410164   21098 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:07:03.427440   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:07:03.442818   21098 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:07:03.442873   21098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:07:03.457961   21098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:07:03.472688   21098 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:07:03.587297   21098 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:07:03.750451   21098 docker.go:233] disabling docker service ...
	I0831 22:07:03.750529   21098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:07:03.765720   21098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:07:03.779301   21098 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:07:03.904389   21098 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:07:04.017402   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:07:04.032166   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:07:04.050757   21098 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:07:04.050832   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.061287   21098 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:07:04.061357   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.071771   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.082266   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.092904   21098 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:07:04.103797   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.114937   21098 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.132389   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.142812   21098 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:07:04.152012   21098 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:07:04.152067   21098 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:07:04.165405   21098 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:07:04.174718   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:04.283822   21098 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:07:04.383793   21098 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:07:04.383893   21098 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:07:04.388685   21098 start.go:563] Will wait 60s for crictl version
	I0831 22:07:04.388753   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:07:04.392620   21098 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:07:04.444477   21098 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:07:04.444598   21098 ssh_runner.go:195] Run: crio --version
	I0831 22:07:04.473736   21098 ssh_runner.go:195] Run: crio --version
	I0831 22:07:04.503698   21098 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:07:04.505075   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:04.507671   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:04.508005   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:04.508029   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:04.508213   21098 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:07:04.512325   21098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:07:04.525355   21098 kubeadm.go:883] updating cluster {Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:07:04.525461   21098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:07:04.525500   21098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:07:04.558664   21098 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0831 22:07:04.558743   21098 ssh_runner.go:195] Run: which lz4
	I0831 22:07:04.562947   21098 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 22:07:04.567112   21098 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 22:07:04.567139   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0831 22:07:05.903076   21098 crio.go:462] duration metric: took 1.340167325s to copy over tarball
	I0831 22:07:05.903140   21098 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 22:07:08.148415   21098 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.245250117s)
	I0831 22:07:08.148446   21098 crio.go:469] duration metric: took 2.245343942s to extract the tarball
	I0831 22:07:08.148455   21098 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 22:07:08.185382   21098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:07:08.228652   21098 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:07:08.228676   21098 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:07:08.228684   21098 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0831 22:07:08.228785   21098 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-132210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:07:08.228868   21098 ssh_runner.go:195] Run: crio config
	I0831 22:07:08.272478   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:07:08.272508   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:07:08.272527   21098 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:07:08.272550   21098 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-132210 NodeName:addons-132210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:07:08.272727   21098 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-132210"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:07:08.272797   21098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:07:08.282654   21098 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:07:08.282722   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:07:08.292061   21098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:07:08.308679   21098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:07:08.324837   21098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0831 22:07:08.341642   21098 ssh_runner.go:195] Run: grep 192.168.39.12	control-plane.minikube.internal$ /etc/hosts
	I0831 22:07:08.345567   21098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:07:08.357961   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:08.466928   21098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:08.482753   21098 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210 for IP: 192.168.39.12
	I0831 22:07:08.482776   21098 certs.go:194] generating shared ca certs ...
	I0831 22:07:08.482790   21098 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.482937   21098 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:07:08.597311   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt ...
	I0831 22:07:08.597339   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt: {Name:mkfc4c408c230132bbe7fe213eeea10a6827c0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.597509   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key ...
	I0831 22:07:08.597520   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key: {Name:mkd43af6d176eb1599961c21c4cf9cd0b89179f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.597585   21098 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:07:08.724372   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt ...
	I0831 22:07:08.724403   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt: {Name:mk9535d600107772240a5a04a39fba46922be0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.724563   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key ...
	I0831 22:07:08.724574   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key: {Name:mkde040c84f81ae9d500962d5b2c7d3a71ca66c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.724640   21098 certs.go:256] generating profile certs ...
	I0831 22:07:08.724688   21098 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key
	I0831 22:07:08.724702   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt with IP's: []
	I0831 22:07:08.875287   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt ...
	I0831 22:07:08.875314   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: {Name:mk5db0031ee87d851d15425d75d7b2faf9a2a074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.875490   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key ...
	I0831 22:07:08.875501   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key: {Name:mk19417e85915a2da4d854ab40b604380b362ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.875569   21098 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573
	I0831 22:07:08.875586   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12]
	I0831 22:07:08.931384   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 ...
	I0831 22:07:08.931413   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573: {Name:mk348633e181ba1f2f701144ddd9247b046d96ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.931554   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573 ...
	I0831 22:07:08.931567   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573: {Name:mk786aa380be6f62aca47aa829b55a6abecc88d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.931632   21098 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt
	I0831 22:07:08.931712   21098 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key
	I0831 22:07:08.931760   21098 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key
	I0831 22:07:08.931777   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt with IP's: []
	I0831 22:07:08.977840   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt ...
	I0831 22:07:08.977870   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt: {Name:mk26c70606574ad0633e48cf1995428b32594850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.978036   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key ...
	I0831 22:07:08.978047   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key: {Name:mk7a0020fb4b16382f09b75c285c938b4e52843a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.978220   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:07:08.978258   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:07:08.978282   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:07:08.978303   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:07:08.978949   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:07:09.004455   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:07:09.029604   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:07:09.053313   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:07:09.077554   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:07:09.102196   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:07:09.127069   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:07:09.153769   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:07:09.180539   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:07:09.206167   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:07:09.224663   21098 ssh_runner.go:195] Run: openssl version
	I0831 22:07:09.230496   21098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:07:09.241375   21098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.246377   21098 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.246454   21098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.252587   21098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:07:09.263592   21098 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:07:09.267795   21098 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:07:09.267846   21098 kubeadm.go:392] StartCluster: {Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:07:09.267917   21098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:07:09.267965   21098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:07:09.309105   21098 cri.go:89] found id: ""
	I0831 22:07:09.309176   21098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:07:09.319285   21098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:07:09.333293   21098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:07:09.348394   21098 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:07:09.348414   21098 kubeadm.go:157] found existing configuration files:
	
	I0831 22:07:09.348466   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:07:09.358972   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:07:09.359049   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:07:09.370609   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:07:09.382278   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:07:09.382347   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:07:09.393363   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:07:09.403425   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:07:09.403501   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:07:09.414483   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:07:09.425120   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:07:09.425188   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:07:09.436044   21098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 22:07:09.489573   21098 kubeadm.go:310] W0831 22:07:09.473217     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:07:09.490547   21098 kubeadm.go:310] W0831 22:07:09.474222     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:07:09.600273   21098 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:07:19.334217   21098 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:07:19.334291   21098 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:07:19.334389   21098 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:07:19.334542   21098 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:07:19.334652   21098 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:07:19.334708   21098 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:07:19.336431   21098 out.go:235]   - Generating certificates and keys ...
	I0831 22:07:19.336518   21098 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:07:19.336608   21098 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:07:19.336691   21098 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:07:19.336759   21098 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:07:19.336849   21098 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:07:19.336925   21098 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:07:19.337003   21098 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:07:19.337137   21098 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-132210 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0831 22:07:19.337224   21098 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:07:19.337376   21098 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-132210 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0831 22:07:19.337459   21098 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:07:19.337525   21098 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:07:19.337585   21098 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:07:19.337668   21098 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:07:19.337742   21098 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:07:19.337831   21098 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:07:19.337921   21098 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:07:19.338006   21098 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:07:19.338077   21098 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:07:19.338185   21098 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:07:19.338278   21098 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:07:19.340682   21098 out.go:235]   - Booting up control plane ...
	I0831 22:07:19.340798   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:07:19.340931   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:07:19.341031   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:07:19.341176   21098 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:07:19.341298   21098 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:07:19.341358   21098 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:07:19.341525   21098 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:07:19.341674   21098 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:07:19.341768   21098 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001861689s
	I0831 22:07:19.341842   21098 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:07:19.341928   21098 kubeadm.go:310] [api-check] The API server is healthy after 5.002243064s
	I0831 22:07:19.342094   21098 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:07:19.342281   21098 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:07:19.342371   21098 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:07:19.342560   21098 kubeadm.go:310] [mark-control-plane] Marking the node addons-132210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:07:19.342651   21098 kubeadm.go:310] [bootstrap-token] Using token: tds7o0.8p21t51ubuabfjmq
	I0831 22:07:19.344005   21098 out.go:235]   - Configuring RBAC rules ...
	I0831 22:07:19.344099   21098 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:07:19.344192   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:07:19.344360   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:07:19.344510   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:07:19.344781   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:07:19.344861   21098 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:07:19.344973   21098 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:07:19.345017   21098 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:07:19.345057   21098 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:07:19.345063   21098 kubeadm.go:310] 
	I0831 22:07:19.345111   21098 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:07:19.345117   21098 kubeadm.go:310] 
	I0831 22:07:19.345211   21098 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:07:19.345219   21098 kubeadm.go:310] 
	I0831 22:07:19.345240   21098 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:07:19.345289   21098 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:07:19.345334   21098 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:07:19.345340   21098 kubeadm.go:310] 
	I0831 22:07:19.345393   21098 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:07:19.345401   21098 kubeadm.go:310] 
	I0831 22:07:19.345443   21098 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:07:19.345452   21098 kubeadm.go:310] 
	I0831 22:07:19.345503   21098 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:07:19.345607   21098 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:07:19.345685   21098 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:07:19.345695   21098 kubeadm.go:310] 
	I0831 22:07:19.345816   21098 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:07:19.345897   21098 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:07:19.345903   21098 kubeadm.go:310] 
	I0831 22:07:19.345969   21098 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tds7o0.8p21t51ubuabfjmq \
	I0831 22:07:19.346062   21098 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e \
	I0831 22:07:19.346084   21098 kubeadm.go:310] 	--control-plane 
	I0831 22:07:19.346090   21098 kubeadm.go:310] 
	I0831 22:07:19.346184   21098 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:07:19.346195   21098 kubeadm.go:310] 
	I0831 22:07:19.346266   21098 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tds7o0.8p21t51ubuabfjmq \
	I0831 22:07:19.346370   21098 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e 
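The join commands above embed a --discovery-token-ca-cert-hash, which is never verified anywhere in this log. As a hedged aside (illustrative only, not part of the captured output), that hash can be recomputed from the cluster CA with the standard openssl pipeline from the kubeadm documentation; on this node the CA sits under the certificateDir shown earlier (/var/lib/minikube/certs), whereas a stock kubeadm install would use /etc/kubernetes/pki/ca.crt:

    # recompute the SHA-256 of the CA public key (DER form); if the cluster CA is
    # consistent with the join command above, the output should match the
    # sha256:9fa6be08... value printed by kubeadm
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'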
	I0831 22:07:19.346389   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:07:19.346398   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:07:19.347902   21098 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 22:07:19.348984   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 22:07:19.359846   21098 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
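The 496-byte conflist itself is not reproduced in the log. Purely for orientation (an assumed shape, not the file minikube actually wrote; the subnet and plugin list are placeholders), a minimal bridge CNI config satisfying the step above looks roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }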
	I0831 22:07:19.378926   21098 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:07:19.378983   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:19.379028   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-132210 minikube.k8s.io/updated_at=2024_08_31T22_07_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-132210 minikube.k8s.io/primary=true
	I0831 22:07:19.505912   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:19.528337   21098 ops.go:34] apiserver oom_adj: -16
	I0831 22:07:20.006130   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:20.506049   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:21.006229   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:21.506568   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:22.006961   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:22.506496   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.006336   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.506858   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.585460   21098 kubeadm.go:1113] duration metric: took 4.206527831s to wait for elevateKubeSystemPrivileges
	I0831 22:07:23.585486   21098 kubeadm.go:394] duration metric: took 14.317645494s to StartCluster
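At this point the control plane is up, and the minikube-rbac clusterrolebinding and node labels applied by the two kubectl invocations above should exist; the log only polls the default service account and does not show them. A hedged manual spot-check (illustrative commands, not run in this test) would be:

    kubectl --context addons-132210 get clusterrolebinding minikube-rbac
    kubectl --context addons-132210 get node addons-132210 --show-labels
    kubectl --context addons-132210 -n kube-system get serviceaccount default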
	I0831 22:07:23.585502   21098 settings.go:142] acquiring lock: {Name:mkec6b4f5d3301688503002977bc4d63aab7adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:23.585612   21098 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:07:23.585914   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/kubeconfig: {Name:mkc6d6b60cc62b336d228fe4b49e098aa4d94f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:23.586102   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:07:23.586108   21098 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:07:23.586191   21098 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
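The toEnable map above comes from the profile's addon configuration; the same switches can be flipped per addon from the minikube CLI. A hedged usage sketch (not commands captured in this log) for the registry addon that this profile enables:

    out/minikube-linux-amd64 -p addons-132210 addons list
    out/minikube-linux-amd64 -p addons-132210 addons enable registry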
	I0831 22:07:23.586284   21098 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-132210"
	I0831 22:07:23.586294   21098 addons.go:69] Setting default-storageclass=true in profile "addons-132210"
	I0831 22:07:23.586299   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:23.586295   21098 addons.go:69] Setting cloud-spanner=true in profile "addons-132210"
	I0831 22:07:23.586317   21098 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-132210"
	I0831 22:07:23.586338   21098 addons.go:234] Setting addon cloud-spanner=true in "addons-132210"
	I0831 22:07:23.586334   21098 addons.go:69] Setting metrics-server=true in profile "addons-132210"
	I0831 22:07:23.586358   21098 addons.go:69] Setting inspektor-gadget=true in profile "addons-132210"
	I0831 22:07:23.586370   21098 addons.go:69] Setting helm-tiller=true in profile "addons-132210"
	I0831 22:07:23.586379   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586382   21098 addons.go:234] Setting addon inspektor-gadget=true in "addons-132210"
	I0831 22:07:23.586383   21098 addons.go:69] Setting storage-provisioner=true in profile "addons-132210"
	I0831 22:07:23.586392   21098 addons.go:234] Setting addon helm-tiller=true in "addons-132210"
	I0831 22:07:23.586403   21098 addons.go:234] Setting addon storage-provisioner=true in "addons-132210"
	I0831 22:07:23.586413   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586423   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586433   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586686   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586728   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586770   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586804   21098 addons.go:69] Setting registry=true in profile "addons-132210"
	I0831 22:07:23.586813   21098 addons.go:69] Setting volumesnapshots=true in profile "addons-132210"
	I0831 22:07:23.586825   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586832   21098 addons.go:234] Setting addon registry=true in "addons-132210"
	I0831 22:07:23.586844   21098 addons.go:234] Setting addon volumesnapshots=true in "addons-132210"
	I0831 22:07:23.586855   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586283   21098 addons.go:69] Setting yakd=true in profile "addons-132210"
	I0831 22:07:23.586867   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586889   21098 addons.go:234] Setting addon yakd=true in "addons-132210"
	I0831 22:07:23.586916   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586807   21098 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-132210"
	I0831 22:07:23.586988   21098 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-132210"
	I0831 22:07:23.587205   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587226   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587228   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586371   21098 addons.go:234] Setting addon metrics-server=true in "addons-132210"
	I0831 22:07:23.587269   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586360   21098 addons.go:69] Setting gcp-auth=true in profile "addons-132210"
	I0831 22:07:23.587294   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587296   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.587300   21098 mustload.go:65] Loading cluster: addons-132210
	I0831 22:07:23.587308   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587341   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587377   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586345   21098 addons.go:69] Setting ingress-dns=true in profile "addons-132210"
	I0831 22:07:23.587497   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:23.586770   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587534   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587643   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587679   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587497   21098 addons.go:234] Setting addon ingress-dns=true in "addons-132210"
	I0831 22:07:23.586789   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587724   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.587760   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586794   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586854   21098 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-132210"
	I0831 22:07:23.586783   21098 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-132210"
	I0831 22:07:23.587810   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587828   21098 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-132210"
	I0831 22:07:23.586798   21098 addons.go:69] Setting volcano=true in profile "addons-132210"
	I0831 22:07:23.587854   21098 addons.go:234] Setting addon volcano=true in "addons-132210"
	I0831 22:07:23.586331   21098 addons.go:69] Setting ingress=true in profile "addons-132210"
	I0831 22:07:23.587887   21098 addons.go:234] Setting addon ingress=true in "addons-132210"
	I0831 22:07:23.588117   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.588477   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.588503   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.588555   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.588574   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.588797   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.588819   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.589146   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.589162   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.589185   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.589230   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.589278   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.595405   21098 out.go:177] * Verifying Kubernetes components...
	I0831 22:07:23.599775   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:23.607898   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0831 22:07:23.608464   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I0831 22:07:23.608573   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0831 22:07:23.609061   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609163   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609490   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609665   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.609681   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.609938   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.609953   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.610031   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610054   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.610072   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.610147   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0831 22:07:23.610474   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610549   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0831 22:07:23.610740   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.610794   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610831   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.611018   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.611156   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.611170   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.611286   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.611299   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.611477   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.611618   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.611699   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.615775   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.615947   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.615974   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.616335   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.616370   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.621980   21098 addons.go:234] Setting addon default-storageclass=true in "addons-132210"
	I0831 22:07:23.622070   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.622457   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.622516   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.623860   21098 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-132210"
	I0831 22:07:23.623897   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.624221   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.624251   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.631854   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.615777   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.632193   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.615777   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.632797   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.632822   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0831 22:07:23.639452   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I0831 22:07:23.639483   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0831 22:07:23.640021   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.640140   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.640612   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.640631   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.640965   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.641062   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.641077   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.641147   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.641480   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.642095   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.642132   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.644079   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0831 22:07:23.644378   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.644778   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.644853   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.644867   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.644876   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.645175   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.645259   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.645287   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.645335   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0831 22:07:23.645668   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.645683   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.645700   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.645993   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.646012   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.646152   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.646163   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.647040   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.647260   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.647648   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.647673   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.648054   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.649653   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.651862   21098 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:07:23.653359   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0831 22:07:23.653404   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:07:23.653419   21098 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:07:23.653443   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.653793   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.656591   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.657110   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.657148   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.657255   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.657289   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.657300   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.657746   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.657824   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.657895   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0831 22:07:23.657957   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.658358   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.658386   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.658390   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.658533   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.659277   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.659302   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.659683   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.659864   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.661487   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.663195   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:07:23.663288   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0831 22:07:23.663682   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.664270   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.664292   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.664416   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:07:23.664440   21098 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:07:23.664462   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.664598   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.665099   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.665137   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.668127   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.668154   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.668185   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.668378   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.668565   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.668732   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.668882   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.669430   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0831 22:07:23.669703   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.670101   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.670117   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.670405   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.671393   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.671430   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.672401   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0831 22:07:23.672405   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0831 22:07:23.672825   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.672904   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.673447   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.673475   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.673794   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.674020   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.674041   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.674092   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.674985   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.675528   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.675566   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.676624   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.678884   21098 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:07:23.680300   21098 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:23.680318   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:07:23.680341   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.681210   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0831 22:07:23.683715   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.683816   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0831 22:07:23.684416   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.684430   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.684488   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.684506   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.684593   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.684729   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.684885   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.684908   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.685078   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.685876   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.686073   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.686679   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0831 22:07:23.687155   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.687443   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.687626   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.687903   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.687917   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.688614   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.688628   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.688964   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.689489   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.689521   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.689640   21098 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:07:23.690115   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.690674   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.690712   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.690910   21098 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:23.690929   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:07:23.690949   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.693797   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.694203   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.694226   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.694378   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.694536   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.694652   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.694748   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.695907   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0831 22:07:23.696312   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.696776   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.696797   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.697094   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.697267   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.704189   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.704458   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0831 22:07:23.704894   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.705446   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.705465   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.705571   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0831 22:07:23.705976   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.706019   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:23.706276   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.706426   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.706438   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.706789   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.707335   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.707376   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.707662   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.708390   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0831 22:07:23.708421   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:23.708877   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.709389   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.709405   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.709467   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.709506   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44487
	I0831 22:07:23.709999   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.710056   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0831 22:07:23.710157   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0831 22:07:23.710455   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.710596   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.710831   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.710851   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.710876   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.710886   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:07:23.710934   21098 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:07:23.711123   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.711251   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.711467   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.711486   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.711519   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.712107   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.712202   21098 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:23.712222   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:07:23.712241   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.712501   21098 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:23.712517   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:07:23.712531   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.712669   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.712683   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.712710   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.712727   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37725
	I0831 22:07:23.712748   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.713405   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.713788   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.713855   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.714889   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.714908   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.715016   21098 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:07:23.715152   21098 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0831 22:07:23.715575   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.715816   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.716851   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.717255   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0831 22:07:23.717351   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.717594   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0831 22:07:23.717606   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0831 22:07:23.717622   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.718309   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.718412   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0831 22:07:23.718545   21098 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:07:23.718731   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.719156   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.719170   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.719236   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719258   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719522   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.719872   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.719904   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719936   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.719954   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.720069   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.720084   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.720095   21098 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:07:23.720107   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:07:23.720130   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.720444   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.720568   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.720598   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.720724   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.720879   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.720934   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.720979   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.721048   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.721785   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.721873   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.722229   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.723401   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.723420   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.723449   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.723458   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0831 22:07:23.723466   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.723623   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.723671   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.723988   21098 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:23.723999   21098 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:07:23.724001   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.724033   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.724011   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.724695   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.724718   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.724889   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
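Each "installing /etc/kubernetes/addons/..." line above stages a manifest on the guest over SSH before it is applied. As a hedged spot-check (not performed in this run), the staged files can be listed through the same SSH path minikube uses:

    out/minikube-linux-amd64 -p addons-132210 ssh -- ls -l /etc/kubernetes/addons/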
	I0831 22:07:23.725405   21098 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:07:23.725476   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.725493   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.725933   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.726224   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.726494   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:07:23.726505   21098 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:07:23.726517   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.727867   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.728730   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.728793   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.729260   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.729288   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730267   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:07:23.730375   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730404   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.730417   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.730471   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.730484   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730629   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.730630   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.730777   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.730843   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.730978   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.731217   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.731708   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.731727   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.732701   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:07:23.733806   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0831 22:07:23.733914   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.734151   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.734236   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.734369   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.734573   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.734941   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.734955   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.735218   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:07:23.735423   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.735605   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.737637   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:07:23.737864   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.739670   21098 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:07:23.739673   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:07:23.740906   21098 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:23.740926   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:07:23.740944   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.742803   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:07:23.743050   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0831 22:07:23.743591   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.744134   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.744153   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.744225   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.744513   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.744683   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.744705   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.744736   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.744900   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.745356   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:07:23.745430   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.745580   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.745776   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.746229   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.746407   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:23.746416   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:23.746590   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:23.746598   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:23.746604   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:23.746609   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:23.746831   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:23.746844   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	W0831 22:07:23.746916   21098 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0831 22:07:23.748245   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:07:23.749403   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:07:23.749426   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:07:23.749442   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.751103   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0831 22:07:23.751505   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.751960   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.751972   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.752271   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.752468   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.752488   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.752879   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.752892   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.753179   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.753384   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.753544   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.753666   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.753967   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	W0831 22:07:23.754404   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53982->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.754423   21098 retry.go:31] will retry after 201.037828ms: ssh: handshake failed: read tcp 192.168.39.1:53982->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.755597   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0831 22:07:23.755767   21098 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:07:23.755970   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.756401   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.756422   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.756792   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.756966   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.757169   21098 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:07:23.757183   21098 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:07:23.757195   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.758339   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.759819   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.760016   21098 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:07:23.760235   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.760273   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.760417   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.760619   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.760786   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.760948   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	W0831 22:07:23.761568   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53984->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.761587   21098 retry.go:31] will retry after 339.775685ms: ssh: handshake failed: read tcp 192.168.39.1:53984->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.762678   21098 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:07:23.764273   21098 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:23.764290   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:07:23.764302   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.767265   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.767714   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.767737   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.768009   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.768256   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	W0831 22:07:23.768259   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53988->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.768311   21098 retry.go:31] will retry after 253.843102ms: ssh: handshake failed: read tcp 192.168.39.1:53988->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.768409   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.768516   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	W0831 22:07:23.769143   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53996->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.769159   21098 retry.go:31] will retry after 228.687708ms: ssh: handshake failed: read tcp 192.168.39.1:53996->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:24.009671   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0831 22:07:24.009698   21098 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0831 22:07:24.035122   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:07:24.035143   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:07:24.096675   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:24.137383   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:07:24.137405   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:07:24.192363   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:24.208220   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:07:24.208244   21098 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:07:24.213758   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:24.294093   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:24.337682   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:24.337708   21098 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0831 22:07:24.355787   21098 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:07:24.355811   21098 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:07:24.397120   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:07:24.397152   21098 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:07:24.399259   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:07:24.399283   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:07:24.402180   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:24.414440   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:07:24.414467   21098 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:07:24.448723   21098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:24.448889   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:07:24.517279   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:24.544228   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:07:24.544262   21098 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:07:24.582484   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:07:24.582507   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:07:24.590888   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:24.616331   21098 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:24.616362   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:07:24.621087   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:07:24.621125   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:07:24.734564   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:24.734588   21098 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:07:24.758600   21098 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:07:24.758627   21098 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:07:24.761196   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:24.842914   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:24.842933   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:07:24.864484   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:07:24.864510   21098 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:07:24.881251   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:07:24.881275   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:07:24.905038   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:24.972031   21098 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:07:24.972050   21098 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:07:25.015374   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:25.038602   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:25.055589   21098 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:25.055612   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:07:25.151602   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:07:25.151634   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:07:25.172190   21098 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:07:25.172212   21098 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:07:25.405884   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:25.444500   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:07:25.444532   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:07:25.463903   21098 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:07:25.463928   21098 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:07:25.694161   21098 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:07:25.694186   21098 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:07:25.820674   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:07:25.820702   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:07:26.073362   21098 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:07:26.073394   21098 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:07:26.236676   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:07:26.236699   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:07:26.439580   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:07:26.439601   21098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:07:26.439960   21098 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:26.439985   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:07:26.584141   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:07:26.584183   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:07:26.783005   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:26.907600   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:07:26.907633   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:07:27.113741   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.01702554s)
	I0831 22:07:27.113757   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.92136815s)
	I0831 22:07:27.113790   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.113800   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.113830   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.113849   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114071   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114123   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114136   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.114145   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114194   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114229   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114252   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114268   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.114277   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114475   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114488   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114509   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114523   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114580   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114592   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.185606   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.185631   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.185967   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.185985   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.328527   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:27.328551   21098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:07:27.420622   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:28.677844   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.383721569s)
	I0831 22:07:28.677898   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.677918   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678012   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464218982s)
	I0831 22:07:28.678051   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678062   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678125   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678139   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678148   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678155   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678124   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678363   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678382   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678392   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678399   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678411   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678423   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678427   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678445   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678604   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678634   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678641   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:30.778509   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:07:30.778553   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:30.781708   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:30.782089   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:30.782125   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:30.782277   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:30.782513   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:30.782693   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:30.782862   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:31.160940   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:07:31.262365   21098 addons.go:234] Setting addon gcp-auth=true in "addons-132210"
	I0831 22:07:31.262423   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:31.262727   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:31.262758   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:31.277512   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0831 22:07:31.277939   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:31.278419   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:31.278439   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:31.278698   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:31.279297   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:31.279351   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:31.294328   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0831 22:07:31.294767   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:31.295196   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:31.295217   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:31.295567   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:31.295765   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:31.297275   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:31.297521   21098 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:07:31.297544   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:31.300179   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:31.300578   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:31.300608   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:31.300739   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:31.300921   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:31.301090   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:31.301236   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:32.605488   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.203266923s)
	I0831 22:07:32.605553   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.605587   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.605634   21098 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.156868741s)
	I0831 22:07:32.605738   21098 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.156819626s)
	I0831 22:07:32.605762   21098 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0831 22:07:32.606876   21098 node_ready.go:35] waiting up to 6m0s for node "addons-132210" to be "Ready" ...
	I0831 22:07:32.607056   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.089745734s)
	I0831 22:07:32.607084   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607095   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607118   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.016199589s)
	I0831 22:07:32.607152   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607164   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607211   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.845992141s)
	I0831 22:07:32.607230   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607245   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607248   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.702177169s)
	I0831 22:07:32.607264   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607279   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607359   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.591933506s)
	I0831 22:07:32.607385   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607396   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607840   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.569207893s)
	I0831 22:07:32.607890   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607912   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607980   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.607989   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608007   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608017   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608040   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.202125622s)
	W0831 22:07:32.608084   21098 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:32.608103   21098 retry.go:31] will retry after 213.169609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:32.608139   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608154   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608156   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608180   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608181   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608196   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608201   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608205   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608217   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608221   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.825175604s)
	I0831 22:07:32.608272   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608287   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608294   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608321   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608328   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608225   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608446   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608456   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608704   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608733   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608743   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608759   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608768   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.609174   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.609191   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.609201   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.609210   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608880   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.610038   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.610082   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.610099   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.610108   21098 addons.go:475] Verifying addon ingress=true in "addons-132210"
	I0831 22:07:32.610320   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.610332   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.610339   21098 addons.go:475] Verifying addon registry=true in "addons-132210"
	I0831 22:07:32.611022   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611037   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611103   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611228   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611256   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611264   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611281   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.611290   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.611294   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611320   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611347   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611356   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.611364   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.611744   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611769   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611785   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611796   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611796   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611805   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.612775   21098 out.go:177] * Verifying ingress addon...
	I0831 22:07:32.612947   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.612972   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.613355   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.613371   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.613380   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.612991   21098 out.go:177] * Verifying registry addon...
	I0831 22:07:32.613676   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.613692   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.613702   21098 addons.go:475] Verifying addon metrics-server=true in "addons-132210"
	I0831 22:07:32.613754   21098 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-132210 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:07:32.615291   21098 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:07:32.616400   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:07:32.633226   21098 node_ready.go:49] node "addons-132210" has status "Ready":"True"
	I0831 22:07:32.633254   21098 node_ready.go:38] duration metric: took 26.354748ms for node "addons-132210" to be "Ready" ...
	I0831 22:07:32.633267   21098 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:32.672510   21098 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:07:32.672535   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.672811   21098 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:07:32.672833   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.716505   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.716533   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.716849   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.716869   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.722171   21098 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.790958   21098 pod_ready.go:93] pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.790982   21098 pod_ready.go:82] duration metric: took 68.780152ms for pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.790998   21098 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.822430   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:32.843686   21098 pod_ready.go:93] pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.843710   21098 pod_ready.go:82] duration metric: took 52.705196ms for pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.843719   21098 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.894732   21098 pod_ready.go:93] pod "etcd-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.894755   21098 pod_ready.go:82] duration metric: took 51.029517ms for pod "etcd-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.894765   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.909271   21098 pod_ready.go:93] pod "kube-apiserver-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.909293   21098 pod_ready.go:82] duration metric: took 14.521596ms for pod "kube-apiserver-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.909302   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.013537   21098 pod_ready.go:93] pod "kube-controller-manager-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.013559   21098 pod_ready.go:82] duration metric: took 104.249609ms for pod "kube-controller-manager-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.013571   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pf4zb" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.127456   21098 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-132210" context rescaled to 1 replicas
	I0831 22:07:33.148736   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.257499   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.418853   21098 pod_ready.go:93] pod "kube-proxy-pf4zb" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.418877   21098 pod_ready.go:82] duration metric: took 405.299679ms for pod "kube-proxy-pf4zb" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.418890   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.854578   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.855771   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.865760   21098 pod_ready.go:93] pod "kube-scheduler-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.865782   21098 pod_ready.go:82] duration metric: took 446.884331ms for pod "kube-scheduler-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.865796   21098 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:34.148775   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.148849   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.303845   21098 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.006297628s)
	I0831 22:07:34.303848   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.883150423s)
	I0831 22:07:34.304054   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.304074   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.304425   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.304447   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.304456   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.304467   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.304698   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.304719   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.304743   21098 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-132210"
	I0831 22:07:34.304787   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.305581   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:34.306666   21098 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:07:34.308329   21098 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:07:34.309280   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:07:34.309726   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:34.309747   21098 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:07:34.329848   21098 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:07:34.329875   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.454442   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:34.454475   21098 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:07:34.518709   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:34.518732   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:07:34.575530   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:34.579667   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.757184457s)
	I0831 22:07:34.579722   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.579737   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.580030   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.580053   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.580073   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.580089   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.580102   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.580283   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.580308   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.580311   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.619308   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.620410   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.814548   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.120705   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.121027   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.313455   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.628958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.629640   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.874670   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.924472   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:35.964663   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.389094024s)
	I0831 22:07:35.964728   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:35.964747   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:35.965086   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:35.965129   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:35.965146   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:35.965161   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:35.965177   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:35.965478   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:35.965495   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:35.965500   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:35.967806   21098 addons.go:475] Verifying addon gcp-auth=true in "addons-132210"
	I0831 22:07:35.969545   21098 out.go:177] * Verifying gcp-auth addon...
	I0831 22:07:35.971896   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:07:35.999763   21098 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:35.999784   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:36.122605   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.123410   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.315123   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.475878   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:36.619752   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.620766   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.814203   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.975190   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:37.122336   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.122478   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.315341   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.475177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:37.620866   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.621439   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.814228   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.975613   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:38.120903   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.121229   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.314007   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.372392   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:38.475094   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:38.944270   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.944466   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.944638   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.977495   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:39.125969   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.126728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.313948   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.477476   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:39.620217   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.620445   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.814405   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.974903   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:40.121141   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.121755   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.314729   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.475251   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:40.620786   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.621250   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.814002   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.872198   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:41.005315   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:41.121910   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.122193   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.315886   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.476677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:41.621217   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.621565   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.823677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.977326   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:42.120209   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.120445   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.319015   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.476300   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:42.620896   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.621628   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.813805   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.872520   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:42.975650   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:43.119591   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.120374   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.316617   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.476126   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:43.619662   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.620425   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.815672   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.977099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:44.120689   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.120721   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.313640   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.474938   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:44.619883   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.620952   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.816734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.975512   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:45.119105   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.119826   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.313584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.380588   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:45.475926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:45.619771   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.620772   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.813745   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.975148   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.120296   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.120403   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.314008   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.475502   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.619407   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.619757   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.813669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.976377   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.121378   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.121861   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.320782   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.475797   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.620484   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.621120   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.817902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.873131   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:47.979915   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.120586   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.121010   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.314359   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.475174   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.620253   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.620967   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.813635   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.975699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.119734   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.120086   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.313782   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.475879   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.619985   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.621004   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.815468   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.873566   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:49.975581   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.120337   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.120541   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.314227   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.478135   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.622036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.622859   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.814060   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.975967   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.120306   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.121507   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.314547   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.475724   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.620114   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.620309   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.814022   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.976109   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.121801   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:52.122553   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.314307   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.372533   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:52.476431   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.619444   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.620536   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:52.814597   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.975521   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.120042   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.120210   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:53.314115   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.475728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.620177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:53.623813   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.814919   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.975959   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.120801   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.121168   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:54.315417   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.374460   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:54.476113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.619806   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.621022   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:54.815198   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.975080   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.120293   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.121322   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:55.314732   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.475687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.619856   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.620809   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:55.814765   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.975740   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.120854   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:56.121921   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.316560   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.475631   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.619589   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.620330   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:56.814597   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.872821   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:56.975866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.120787   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.120963   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:57.314895   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.476283   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.618831   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.620240   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:57.813768   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.975551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.121198   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:58.121479   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.314126   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.475209   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.620354   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.623406   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:58.817231   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.975135   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.120742   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.121902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:59.314224   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.372594   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:59.654374   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:59.654873   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.655101   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.814892   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.976412   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.121236   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:00.121952   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.314857   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.476585   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.620958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:00.621503   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.814717   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.975596   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.120556   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:01.121227   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.314332   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.373553   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:01.475855   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.620256   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:01.620695   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.817902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.976941   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.120512   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:02.120709   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.315631   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.475468   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.621509   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:02.621785   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.814806   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.976174   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.120440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:03.120863   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.313700   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.475835   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.619665   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.621704   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:03.814121   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.872588   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:03.975298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.120824   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:04.121184   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.314338   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.475429   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.620540   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.620584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:04.815162   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.976895   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.120594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:05.120730   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.315865   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.476472   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.619151   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.619193   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:05.814469   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.873045   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:05.976083   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.120276   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.121632   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:06.316445   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.476113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.619879   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.621235   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:06.817665   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.977266   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.121891   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:07.125370   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.314681   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.475319   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.622891   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:07.623130   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.815134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.975338   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.120092   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.121833   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:08.314857   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.372618   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:08.475633   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.620926   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.622347   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.022099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.022480   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.120725   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.120911   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.314632   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.476068   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.620093   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.621293   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.814918   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.982257   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.120692   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.121929   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:10.314650   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.475440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.621191   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:10.621624   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.814610   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.871823   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:10.975582   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.120349   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:11.121548   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.314255   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.475551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.619270   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.619644   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:11.813295   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.976245   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.121122   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:12.121879   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.314903   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.475397   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.620793   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:12.621162   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.814057   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.872130   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:12.975754   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.133769   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:13.134318   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.314790   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.477695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.622634   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:13.624847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.821501   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.976538   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.119646   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.120341   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:14.315173   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.475306   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.621185   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.621510   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:14.814467   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.872822   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:14.976294   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.120441   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:15.121127   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.315400   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.475388   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.620578   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.620953   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:15.813943   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.979488   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.121495   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:16.121576   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.314944   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.475455   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.620506   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.620558   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:16.813569   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.872856   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:16.975991   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.120803   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:17.125876   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.314160   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.475916   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.620075   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:17.621270   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.815155   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.981149   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.120629   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.120785   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:18.315019   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.476099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.620556   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.620934   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:18.814347   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.977438   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.120685   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:19.121338   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.315435   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.371445   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:19.475248   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.620321   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:19.620767   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.814394   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.975242   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.120360   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:20.120513   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.315529   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.484317   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.620297   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.620551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:20.814555   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.976127   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.120746   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.120965   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:21.315551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.372806   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:21.476774   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.620656   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.621401   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:21.814726   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.975838   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:22.122780   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.126273   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:22.314614   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.476790   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:22.619929   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.622675   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:22.814144   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.975643   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:23.119721   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.120559   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:23.315087   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.474923   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:23.619836   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.621736   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:23.813687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.871468   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:23.976699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:24.120045   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.123398   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:24.602840   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:24.603194   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.619810   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.621697   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:24.814715   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.975695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:25.120948   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:25.121392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.318299   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.476633   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:25.619392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.620445   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:25.814377   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.872649   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:25.976267   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:26.122178   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:26.122596   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:26.314825   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.474926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:26.620117   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:26.620392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:26.815236   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.976263   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:27.122244   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:27.126825   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:27.314503   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:27.475451   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:27.619077   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:27.620128   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:27.814505   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:27.976659   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:28.119847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:28.119956   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:28.315111   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:28.373901   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:28.477178   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:28.621847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:28.622419   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:28.814623   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:28.975971   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:29.120702   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:29.126856   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:29.333033   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:29.475641   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:29.620251   21098 kapi.go:107] duration metric: took 57.003845187s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:08:29.620894   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:29.813428   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:29.976100   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:30.120301   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:30.315054   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:30.475927   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:30.621321   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:30.816504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:30.873025   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:30.976290   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:31.120152   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:31.316147   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:31.476032   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:31.620260   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:31.816255   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:31.975740   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:32.122583   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:32.314298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:32.475815   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:32.620031   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:32.814337   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:32.873931   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:32.976076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:33.127234   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:33.313541   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:33.475361   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:33.619918   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:33.814036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:33.975222   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:34.119967   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:34.314700   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:34.476130   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:34.619753   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:34.815637   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:34.975904   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:35.119845   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:35.314907   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:35.372290   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:35.475061   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:35.620392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:35.814214   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:35.975293   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:36.120499   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:36.315134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:36.476924   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:36.625728   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:36.815568   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:36.975977   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:37.119760   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:37.314098   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:37.475403   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:37.619353   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:37.814409   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:37.872370   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:38.414352   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:38.422314   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:38.422534   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:38.475478   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:38.620548   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:38.814646   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:38.978424   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:39.120310   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:39.315834   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:39.476326   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:39.619867   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:39.813168   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:39.875054   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:39.983870   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.119802   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:40.381691   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:40.480228   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.621421   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:40.815148   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:40.975440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.119699   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:41.314866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:41.475833   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.619956   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:41.813677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:41.975111   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.121321   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:42.314456   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:42.372543   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:42.475460   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.619163   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:42.814929   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:42.975788   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.120305   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:43.314076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:43.475628   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.620272   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:43.822113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:43.976312   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.119884   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:44.319618   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:44.381557   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:44.477017   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.621506   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:44.826669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:44.976036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.123433   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:45.313890   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:45.476804   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.619848   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:45.813116   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:45.976701   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.119113   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:46.313958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:46.477472   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.620824   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:46.952945   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:46.956360   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:46.975185   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.120135   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:47.325549   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:47.476182   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.618992   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:47.815679   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:47.976615   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.119381   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:48.317018   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:48.476286   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.620330   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:48.814281   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:48.976023   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.119819   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:49.314898   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:49.372370   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:49.475523   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.679647   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:49.815584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:49.975653   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.119243   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:50.314821   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:50.493960   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.620412   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:50.814454   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:50.878784   21098 pod_ready.go:93] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"True"
	I0831 22:08:50.878806   21098 pod_ready.go:82] duration metric: took 1m17.013002962s for pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.878816   21098 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.884470   21098 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace has status "Ready":"True"
	I0831 22:08:50.884489   21098 pod_ready.go:82] duration metric: took 5.665136ms for pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.884509   21098 pod_ready.go:39] duration metric: took 1m18.251226521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:08:50.884533   21098 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:08:50.884580   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:08:50.884638   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:08:50.955600   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:08:50.955626   21098 cri.go:89] found id: ""
	I0831 22:08:50.955635   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:08:50.955684   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:50.971435   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:08:50.971500   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:08:50.979153   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.029305   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:08:51.029329   21098 cri.go:89] found id: ""
	I0831 22:08:51.029338   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:08:51.029396   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.033768   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:08:51.033831   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:08:51.108642   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:08:51.108669   21098 cri.go:89] found id: ""
	I0831 22:08:51.108680   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:08:51.108740   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.114938   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:08:51.115012   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:08:51.121354   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:51.227554   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:08:51.227577   21098 cri.go:89] found id: ""
	I0831 22:08:51.227585   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:08:51.227629   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.242323   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:08:51.242407   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:08:51.306299   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:08:51.306319   21098 cri.go:89] found id: ""
	I0831 22:08:51.306327   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:08:51.306389   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.316849   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:51.317332   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:08:51.317392   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:08:51.404448   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:08:51.404466   21098 cri.go:89] found id: ""
	I0831 22:08:51.404472   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:08:51.404524   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.411682   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:08:51.411753   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:08:51.468597   21098 cri.go:89] found id: ""
	I0831 22:08:51.468623   21098 logs.go:276] 0 containers: []
	W0831 22:08:51.468631   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:08:51.468639   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:08:51.468651   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 22:08:51.482196   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0831 22:08:51.533263   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.533431   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:51.533563   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.533721   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:51.545028   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.545188   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:08:51.564495   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:08:51.564525   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:08:51.624037   21098 kapi.go:107] duration metric: took 1m19.008743885s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:08:51.815909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:51.850860   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:08:51.850908   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:08:51.976237   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.014670   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:08:52.014708   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:08:52.123496   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:08:52.123543   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:08:52.174958   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:08:52.175006   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:08:52.267648   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:08:52.267686   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:08:52.313784   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:52.334510   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:08:52.334536   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:08:52.388833   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:08:52.388872   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:08:52.458242   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:08:52.458270   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:08:52.475384   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.552472   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:08:52.552502   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:08:52.850283   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:52.937891   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:08:52.937926   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:08:52.937989   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:08:52.938003   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:52.938015   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:52.938039   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:52.938050   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:08:52.938058   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:08:52.938065   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:08:52.938073   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:08:52.978298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.315067   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:53.475986   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.817131   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.151054   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.314831   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.476234   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.816394   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.975421   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.315703   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:55.482514   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.815728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:55.974892   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.314245   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:56.475975   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.814011   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:56.976504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.313628   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:57.475060   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.814335   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:57.976408   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.314175   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:58.475969   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.815045   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:58.975678   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.314157   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:59.475913   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.814537   21098 kapi.go:107] duration metric: took 1m25.505259155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:08:59.976603   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.476062   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.976224   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.477863   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.975298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.476482   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.939628   21098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:09:02.961175   21098 api_server.go:72] duration metric: took 1m39.375038741s to wait for apiserver process to appear ...
	I0831 22:09:02.961200   21098 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:09:02.961237   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:09:02.961303   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:09:02.975877   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.999945   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:02.999964   21098 cri.go:89] found id: ""
	I0831 22:09:02.999971   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:09:03.000020   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.005045   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:09:03.005117   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:09:03.053454   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:03.053480   21098 cri.go:89] found id: ""
	I0831 22:09:03.053492   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:09:03.053548   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.057843   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:09:03.057918   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:09:03.102107   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:03.102134   21098 cri.go:89] found id: ""
	I0831 22:09:03.102144   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:09:03.102201   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.106758   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:09:03.106833   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:09:03.151303   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:03.151343   21098 cri.go:89] found id: ""
	I0831 22:09:03.151353   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:09:03.151431   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.155739   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:09:03.155817   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:09:03.212323   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:03.212348   21098 cri.go:89] found id: ""
	I0831 22:09:03.212357   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:09:03.212414   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.217064   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:09:03.217124   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:09:03.258208   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:03.258239   21098 cri.go:89] found id: ""
	I0831 22:09:03.258249   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:09:03.258311   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.262725   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:09:03.262794   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:09:03.304036   21098 cri.go:89] found id: ""
	I0831 22:09:03.304062   21098 logs.go:276] 0 containers: []
	W0831 22:09:03.304070   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:09:03.304077   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:09:03.304095   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:03.342633   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:09:03.342660   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:09:03.400297   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:09:03.400335   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:09:03.415806   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:09:03.415833   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:09:03.476498   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.538271   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:09:03.538303   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:03.602863   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:09:03.602897   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:03.663903   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:09:03.663936   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:03.737918   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:09:03.737948   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:03.788384   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:09:03.788419   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:09:03.838952   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.839121   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:03.839261   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.839450   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:03.850735   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.850895   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:03.871047   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:09:03.871072   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:03.931950   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:09:03.931983   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:09:03.975839   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.476679   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.492557   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:04.492594   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:09:04.492657   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:09:04.492672   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:04.492685   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:04.492696   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:04.492705   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:04.492716   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:04.492725   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:04.492737   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:09:04.975687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.475569   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.975871   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.476108   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.975461   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.476261   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.976037   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.475699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.975874   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.476000   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.975995   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.475521   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.195175   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.476002   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.975232   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.476158   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.975602   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.475134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.976504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.475926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.493799   21098 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0831 22:09:14.501337   21098 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0831 22:09:14.502516   21098 api_server.go:141] control plane version: v1.31.0
	I0831 22:09:14.502536   21098 api_server.go:131] duration metric: took 11.541329499s to wait for apiserver health ...
	I0831 22:09:14.502547   21098 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:09:14.502568   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:09:14.502621   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:09:14.542688   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:14.542712   21098 cri.go:89] found id: ""
	I0831 22:09:14.542721   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:09:14.542778   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.547207   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:09:14.547265   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:09:14.585253   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:14.585277   21098 cri.go:89] found id: ""
	I0831 22:09:14.585285   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:09:14.585348   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.589951   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:09:14.590001   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:09:14.634151   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:14.634171   21098 cri.go:89] found id: ""
	I0831 22:09:14.634178   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:09:14.634221   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.640116   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:09:14.640196   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:09:14.692606   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:14.692629   21098 cri.go:89] found id: ""
	I0831 22:09:14.692636   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:09:14.692684   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.699229   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:09:14.699294   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:09:14.736751   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:14.736777   21098 cri.go:89] found id: ""
	I0831 22:09:14.736785   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:09:14.736838   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.741521   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:09:14.741573   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:09:14.780419   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:14.780448   21098 cri.go:89] found id: ""
	I0831 22:09:14.780456   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:09:14.780501   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.785331   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:09:14.785397   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:09:14.832330   21098 cri.go:89] found id: ""
	I0831 22:09:14.832353   21098 logs.go:276] 0 containers: []
	W0831 22:09:14.832362   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:09:14.832371   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:09:14.832385   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:09:14.849233   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:09:14.849266   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:14.894187   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:09:14.894215   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:14.932967   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:09:14.933040   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:09:14.975669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.995013   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:09:14.995045   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:15.054114   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:09:15.054155   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:09:15.476598   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.938089   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:09:15.938136   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 22:09:15.975959   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0831 22:09:15.992400   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:15.992568   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:15.992739   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:15.992917   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.005184   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.005355   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:16.027347   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:09:16.027382   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:09:16.173595   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:09:16.173623   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:16.260126   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:09:16.260162   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:16.304110   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:09:16.304147   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:16.351377   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:16.351404   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:09:16.351460   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:09:16.351474   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.351483   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.351493   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.351510   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.351521   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:16.351531   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:16.351541   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:09:16.477457   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.975815   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.475770   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.979376   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.475592   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.976801   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.476121   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.977073   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.475240   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.976681   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.475484   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.976058   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.475479   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.975925   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.475911   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.976177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.475909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.975151   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.476109   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.975695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.362028   21098 system_pods.go:59] 18 kube-system pods found
	I0831 22:09:26.362061   21098 system_pods.go:61] "coredns-6f6b679f8f-fg5wn" [44101eb2-e5ab-4205-8770-fcd8e3e7c877] Running
	I0831 22:09:26.362066   21098 system_pods.go:61] "csi-hostpath-attacher-0" [d5e59cee-4aef-4a71-8e87-a17016deb8aa] Running
	I0831 22:09:26.362070   21098 system_pods.go:61] "csi-hostpath-resizer-0" [1472dd5a-623f-4e1b-bb88-aa9737965d61] Running
	I0831 22:09:26.362073   21098 system_pods.go:61] "csi-hostpathplugin-f9r7t" [c332f2e3-d867-4e1b-b27f-62b8ff234fb8] Running
	I0831 22:09:26.362077   21098 system_pods.go:61] "etcd-addons-132210" [78c4bd71-140b-49f9-8bc1-4b4e1f3e77e1] Running
	I0831 22:09:26.362080   21098 system_pods.go:61] "kube-apiserver-addons-132210" [266d225a-02ab-4449-bc78-88940e2e01be] Running
	I0831 22:09:26.362083   21098 system_pods.go:61] "kube-controller-manager-addons-132210" [efd3eb72-530e-4d83-9f80-ed4252c65edb] Running
	I0831 22:09:26.362086   21098 system_pods.go:61] "kube-ingress-dns-minikube" [0e0b7880-36a9-4588-b4f2-69ee4d28f341] Running
	I0831 22:09:26.362089   21098 system_pods.go:61] "kube-proxy-pf4zb" [d398a8b8-eef4-41b1-945b-bf73a594737e] Running
	I0831 22:09:26.362092   21098 system_pods.go:61] "kube-scheduler-addons-132210" [40d172ae-efff-4b60-b47f-86e58c381de7] Running
	I0831 22:09:26.362095   21098 system_pods.go:61] "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
	I0831 22:09:26.362099   21098 system_pods.go:61] "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
	I0831 22:09:26.362102   21098 system_pods.go:61] "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
	I0831 22:09:26.362105   21098 system_pods.go:61] "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
	I0831 22:09:26.362108   21098 system_pods.go:61] "snapshot-controller-56fcc65765-d8zmh" [842cfb93-bc24-4a0f-8191-8cff822e4981] Running
	I0831 22:09:26.362111   21098 system_pods.go:61] "snapshot-controller-56fcc65765-vz7w2" [879946b9-6f92-4ad5-8e18-84154122b30a] Running
	I0831 22:09:26.362115   21098 system_pods.go:61] "storage-provisioner" [7444df94-b591-414e-bb8f-6eecc8fb06c5] Running
	I0831 22:09:26.362119   21098 system_pods.go:61] "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
	I0831 22:09:26.362128   21098 system_pods.go:74] duration metric: took 11.859574121s to wait for pod list to return data ...
	I0831 22:09:26.362140   21098 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:09:26.364694   21098 default_sa.go:45] found service account: "default"
	I0831 22:09:26.364718   21098 default_sa.go:55] duration metric: took 2.572024ms for default service account to be created ...
	I0831 22:09:26.364726   21098 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:09:26.371946   21098 system_pods.go:86] 18 kube-system pods found
	I0831 22:09:26.371979   21098 system_pods.go:89] "coredns-6f6b679f8f-fg5wn" [44101eb2-e5ab-4205-8770-fcd8e3e7c877] Running
	I0831 22:09:26.371985   21098 system_pods.go:89] "csi-hostpath-attacher-0" [d5e59cee-4aef-4a71-8e87-a17016deb8aa] Running
	I0831 22:09:26.371989   21098 system_pods.go:89] "csi-hostpath-resizer-0" [1472dd5a-623f-4e1b-bb88-aa9737965d61] Running
	I0831 22:09:26.371993   21098 system_pods.go:89] "csi-hostpathplugin-f9r7t" [c332f2e3-d867-4e1b-b27f-62b8ff234fb8] Running
	I0831 22:09:26.371997   21098 system_pods.go:89] "etcd-addons-132210" [78c4bd71-140b-49f9-8bc1-4b4e1f3e77e1] Running
	I0831 22:09:26.372000   21098 system_pods.go:89] "kube-apiserver-addons-132210" [266d225a-02ab-4449-bc78-88940e2e01be] Running
	I0831 22:09:26.372003   21098 system_pods.go:89] "kube-controller-manager-addons-132210" [efd3eb72-530e-4d83-9f80-ed4252c65edb] Running
	I0831 22:09:26.372007   21098 system_pods.go:89] "kube-ingress-dns-minikube" [0e0b7880-36a9-4588-b4f2-69ee4d28f341] Running
	I0831 22:09:26.372011   21098 system_pods.go:89] "kube-proxy-pf4zb" [d398a8b8-eef4-41b1-945b-bf73a594737e] Running
	I0831 22:09:26.372014   21098 system_pods.go:89] "kube-scheduler-addons-132210" [40d172ae-efff-4b60-b47f-86e58c381de7] Running
	I0831 22:09:26.372017   21098 system_pods.go:89] "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
	I0831 22:09:26.372020   21098 system_pods.go:89] "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
	I0831 22:09:26.372023   21098 system_pods.go:89] "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
	I0831 22:09:26.372046   21098 system_pods.go:89] "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
	I0831 22:09:26.372053   21098 system_pods.go:89] "snapshot-controller-56fcc65765-d8zmh" [842cfb93-bc24-4a0f-8191-8cff822e4981] Running
	I0831 22:09:26.372057   21098 system_pods.go:89] "snapshot-controller-56fcc65765-vz7w2" [879946b9-6f92-4ad5-8e18-84154122b30a] Running
	I0831 22:09:26.372060   21098 system_pods.go:89] "storage-provisioner" [7444df94-b591-414e-bb8f-6eecc8fb06c5] Running
	I0831 22:09:26.372063   21098 system_pods.go:89] "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
	I0831 22:09:26.372068   21098 system_pods.go:126] duration metric: took 7.338208ms to wait for k8s-apps to be running ...
	I0831 22:09:26.372077   21098 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:09:26.372143   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:09:26.387943   21098 system_svc.go:56] duration metric: took 15.858116ms WaitForService to wait for kubelet
	I0831 22:09:26.387974   21098 kubeadm.go:582] duration metric: took 2m2.801840351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:09:26.387995   21098 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:09:26.390995   21098 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:09:26.391021   21098 node_conditions.go:123] node cpu capacity is 2
	I0831 22:09:26.391033   21098 node_conditions.go:105] duration metric: took 3.032634ms to run NodePressure ...
	I0831 22:09:26.391043   21098 start.go:241] waiting for startup goroutines ...
	I0831 22:09:26.475914   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.975777   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.476954   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.975206   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.476090   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.975734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.475698   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.976296   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.476559   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.975576   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.477596   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.975909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.475130   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.975291   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.476041   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.975866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.475356   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.976258   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.475594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.975538   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.475516   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.975882   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.475912   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.980397   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.476464   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.976629   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.476682   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.977594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.476050   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.975586   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.476076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.988997   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.475034   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.976591   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.476154   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.975736   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.476250   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.976670   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.476952   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.975160   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.475606   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.976118   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.476033   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.975996   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.475583   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.976184   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.475823   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.975703   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.476541   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.976407   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.476083   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.976078   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:52.475636   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:52.977028   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:53.475427   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:53.976231   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:54.475762   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:54.975423   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:55.480634   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:55.976191   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:56.475501   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:56.976688   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:57.477084   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:57.975727   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:58.476734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:58.975704   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:59.475793   21098 kapi.go:107] duration metric: took 2m23.503891799s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:09:59.477292   21098 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-132210 cluster.
	I0831 22:09:59.478644   21098 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:09:59.479814   21098 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:09:59.481180   21098 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, nvidia-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0831 22:09:59.482381   21098 addons.go:510] duration metric: took 2m35.8961992s for enable addons: enabled=[cloud-spanner default-storageclass nvidia-device-plugin storage-provisioner ingress-dns inspektor-gadget helm-tiller metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0831 22:09:59.482411   21098 start.go:246] waiting for cluster config update ...
	I0831 22:09:59.482427   21098 start.go:255] writing updated cluster config ...
	I0831 22:09:59.482654   21098 ssh_runner.go:195] Run: rm -f paused
	I0831 22:09:59.531140   21098 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:09:59.533137   21098 out.go:177] * Done! kubectl is now configured to use "addons-132210" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.386392426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92fedb03-b40a-43f4-b9f6-5705537092a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.386447368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92fedb03-b40a-43f4-b9f6-5705537092a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.386982091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647dc8a8efce378bf77c9bb3ccfd1032b3cfc0d4c466d60f95cdaa01ce3a814,PodSandboxId:6dbbd72f7b24e166a28508118da63750187b16c3c003f0a6e423b4e6818c16cc,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1725142743360004765,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d18e18a-4d3d-4c7e-8b3e-a2a83741bcf0,},Annotations:map[string]string{
io.kubernetes.container.hash: a6a7e31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1f234b86f6e100ecb4109b712dc43ae704680587d59276078fc708a0fdacee,PodSandboxId:5edad4f535a7184d02e4d23049f3266b5279747f5caff1385bb20ed27d3c5af0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725142695025422568,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c9deda7c-9530-4d83-a1d4-59d407b5efbb,},Annotations:map
[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68,PodSandboxId:c782e78a6cb82c6fd4b668c72fa43f2bf46e60704340c34d868ff13402351ad8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725142131176789833,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57
996ff-vtskh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e462e5dd-0936-4ad8-bbf2-8be4b08ede14,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@
sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io
/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcc225b594b08351615c8fee416ec6f6451bcae82902608a9a1a2115f0617d9a,PodSandboxId:a2b18ec87a803c57f7a5351446310cfd0589c7fde75f6ee4fd95e9cbeab98353,Metadata:&ContainerMetadata{Name:nvidia-device-plugi
n-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725142081177435045,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-99v85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54398aec-2cfe-4328-a845-e1bd4bbfc99f,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271,PodSandboxId:fbb3d6047448c063f2edf44774dcba73f2ebdee6bc83813f32d71b96cc0390
6a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725142070383456901,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e0b7880-36a9-4588-b4f2-69ee4d28f341,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef
4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f40292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[
string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf
7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92fedb03-b40a-43f4-b9f6-5705537092a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.432700233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23f538b9-ff93-4f0c-bdab-3b61f3e120ac name=/runtime.v1.RuntimeService/Version
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.432775968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23f538b9-ff93-4f0c-bdab-3b61f3e120ac name=/runtime.v1.RuntimeService/Version
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.434614522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8195597c-4c0f-4d11-81eb-f29b3d745b4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.436228137Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142755436200016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:540276,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8195597c-4c0f-4d11-81eb-f29b3d745b4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.436809421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dd82a14-de5e-4fe7-b7f8-c3a4d1764e71 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.436879004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dd82a14-de5e-4fe7-b7f8-c3a4d1764e71 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.437279383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647dc8a8efce378bf77c9bb3ccfd1032b3cfc0d4c466d60f95cdaa01ce3a814,PodSandboxId:6dbbd72f7b24e166a28508118da63750187b16c3c003f0a6e423b4e6818c16cc,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1725142743360004765,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d18e18a-4d3d-4c7e-8b3e-a2a83741bcf0,},Annotations:map[string]string{
io.kubernetes.container.hash: a6a7e31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1f234b86f6e100ecb4109b712dc43ae704680587d59276078fc708a0fdacee,PodSandboxId:5edad4f535a7184d02e4d23049f3266b5279747f5caff1385bb20ed27d3c5af0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725142695025422568,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c9deda7c-9530-4d83-a1d4-59d407b5efbb,},Annotations:map
[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68,PodSandboxId:c782e78a6cb82c6fd4b668c72fa43f2bf46e60704340c34d868ff13402351ad8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725142131176789833,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57
996ff-vtskh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e462e5dd-0936-4ad8-bbf2-8be4b08ede14,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@
sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io
/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcc225b594b08351615c8fee416ec6f6451bcae82902608a9a1a2115f0617d9a,PodSandboxId:a2b18ec87a803c57f7a5351446310cfd0589c7fde75f6ee4fd95e9cbeab98353,Metadata:&ContainerMetadata{Name:nvidia-device-plugi
n-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725142081177435045,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-99v85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54398aec-2cfe-4328-a845-e1bd4bbfc99f,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271,PodSandboxId:fbb3d6047448c063f2edf44774dcba73f2ebdee6bc83813f32d71b96cc0390
6a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725142070383456901,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e0b7880-36a9-4588-b4f2-69ee4d28f341,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef
4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f40292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[
string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf
7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dd82a14-de5e-4fe7-b7f8-c3a4d1764e71 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.449161649Z" level=debug msg="Detected compression format gzip" file="compression/compression.go:126"
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.449218673Z" level=debug msg="Using original blob without modification" file="copy/compression.go:226"
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.449331197Z" level=debug msg="ImagePull (0): docker.io/library/nginx:alpine (sha256:41c49cbde6a69c2861d4443a90e47a59e906386088b706d32aba1091d0f262b0): 0 bytes (0.00%)" file="server/image_pull.go:276" id=7006b0f7-44f0-404e-9f4a-79bdb3e415fe name=/runtime.v1.ImageService/PullImage
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.464240075Z" level=debug msg="Checking if we can reuse blob sha256:9da224fdd4124c20879a425f59ee3d7e9aeccf37356692f37cd7736e38c2efd2: general substitution = true, compression for MIME type \"application/vnd.oci.image.layer.v1.tar+gzip\" = true" file="copy/single.go:681"
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.464263920Z" level=debug msg="ImagePull (2): docker.io/library/nginx:alpine (sha256:41c49cbde6a69c2861d4443a90e47a59e906386088b706d32aba1091d0f262b0): 1210 bytes (100.00%)" file="server/image_pull.go:276" id=7006b0f7-44f0-404e-9f4a-79bdb3e415fe name=/runtime.v1.ImageService/PullImage
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.464595973Z" level=debug msg="Failed to retrieve partial blob: convert_images not configured" file="copy/single.go:756"
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.464671752Z" level=debug msg="Downloading /v2/library/nginx/blobs/sha256:9da224fdd4124c20879a425f59ee3d7e9aeccf37356692f37cd7736e38c2efd2" file="docker/docker_client.go:1038"
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.464957947Z" level=debug msg="GET https://registry-1.docker.io/v2/library/nginx/blobs/sha256:9da224fdd4124c20879a425f59ee3d7e9aeccf37356692f37cd7736e38c2efd2" file="docker/docker_client.go:631"
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.477849991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3101f5d2-9762-42e1-82ff-eccdba63961d name=/runtime.v1.RuntimeService/Version
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.477971213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3101f5d2-9762-42e1-82ff-eccdba63961d name=/runtime.v1.RuntimeService/Version
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.479288436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=384fc7d4-80dc-4dd2-8af1-c7a0239f9490 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.480588729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142755480559580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:540276,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=384fc7d4-80dc-4dd2-8af1-c7a0239f9490 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.481584094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8852e385-2cd4-4e4e-96a3-2f0bcd991a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.481637969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8852e385-2cd4-4e4e-96a3-2f0bcd991a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:19:15 addons-132210 crio[663]: time="2024-08-31 22:19:15.482198266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647dc8a8efce378bf77c9bb3ccfd1032b3cfc0d4c466d60f95cdaa01ce3a814,PodSandboxId:6dbbd72f7b24e166a28508118da63750187b16c3c003f0a6e423b4e6818c16cc,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1725142743360004765,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d18e18a-4d3d-4c7e-8b3e-a2a83741bcf0,},Annotations:map[string]string{
io.kubernetes.container.hash: a6a7e31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1f234b86f6e100ecb4109b712dc43ae704680587d59276078fc708a0fdacee,PodSandboxId:5edad4f535a7184d02e4d23049f3266b5279747f5caff1385bb20ed27d3c5af0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725142695025422568,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c9deda7c-9530-4d83-a1d4-59d407b5efbb,},Annotations:map
[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68,PodSandboxId:c782e78a6cb82c6fd4b668c72fa43f2bf46e60704340c34d868ff13402351ad8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725142131176789833,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57
996ff-vtskh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e462e5dd-0936-4ad8-bbf2-8be4b08ede14,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@
sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io
/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcc225b594b08351615c8fee416ec6f6451bcae82902608a9a1a2115f0617d9a,PodSandboxId:a2b18ec87a803c57f7a5351446310cfd0589c7fde75f6ee4fd95e9cbeab98353,Metadata:&ContainerMetadata{Name:nvidia-device-plugi
n-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725142081177435045,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-99v85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54398aec-2cfe-4328-a845-e1bd4bbfc99f,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271,PodSandboxId:fbb3d6047448c063f2edf44774dcba73f2ebdee6bc83813f32d71b96cc0390
6a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725142070383456901,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e0b7880-36a9-4588-b4f2-69ee4d28f341,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef
4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f40292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[
string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf
7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8852e385-2cd4-4e4e-96a3-2f0bcd991a1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	e8dcfee929f65       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 seconds ago        Running             headlamp                   0                   dc2ee3e74ad94       headlamp-57fb76fcdb-zb4l7
	a647dc8a8efce       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                12 seconds ago       Exited              helm-test                  0                   6dbbd72f7b24e       helm-test
	dc1f234b86f6e       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             About a minute ago   Exited              helper-pod                 0                   5edad4f535a71       helper-pod-delete-pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd
	a5e788d23e628       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago        Running             gcp-auth                   0                   a65bfb6d507f4       gcp-auth-89d5ffd79-6n2z6
	ea07f9fc27ba4       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago       Running             controller                 0                   c782e78a6cb82       ingress-nginx-controller-bc57996ff-vtskh
	03905d71943c4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago       Exited              patch                      0                   1112f04477239       ingress-nginx-admission-patch-5wr2c
	833daa1d9c053       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago       Exited              create                     0                   7f4d1f6450537       ingress-nginx-admission-create-lffjf
	fcc225b594b08       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     11 minutes ago       Running             nvidia-device-plugin-ctr   0                   a2b18ec87a803       nvidia-device-plugin-daemonset-99v85
	b3036ecaa0d68       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             11 minutes ago       Running             minikube-ingress-dns       0                   fbb3d6047448c       kube-ingress-dns-minikube
	7ef4a6c40dbe3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago       Running             metrics-server             0                   c04f5bd826354       metrics-server-84c5f94fbc-4mp2p
	0b70bc07a6fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago       Running             storage-provisioner        0                   e7805858822ce       storage-provisioner
	8bb7c1b21e074       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             11 minutes ago       Running             coredns                    0                   c9d76344783a2       coredns-6f6b679f8f-fg5wn
	dc9d1779c9ec0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             11 minutes ago       Running             kube-proxy                 0                   cd53e58a6020b       kube-proxy-pf4zb
	88f24112cdf2e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             12 minutes ago       Running             kube-controller-manager    0                   1cce6cbc6a4fa       kube-controller-manager-addons-132210
	d5a6630200902       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             12 minutes ago       Running             kube-apiserver             0                   e2253778a2445       kube-apiserver-addons-132210
	9e07eecb0bd41       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago       Running             etcd                       0                   54cbd2b4b9e2e       etcd-addons-132210
	ea40b4dfb934e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             12 minutes ago       Running             kube-scheduler             0                   3f1a88db7a62d       kube-scheduler-addons-132210
	
	
	==> coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] <==
	[INFO] 10.244.0.8:59871 - 44836 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110421s
	[INFO] 10.244.0.8:33356 - 42014 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135624s
	[INFO] 10.244.0.8:33356 - 8221 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242056s
	[INFO] 10.244.0.8:35585 - 13377 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041231s
	[INFO] 10.244.0.8:35585 - 3142 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000183049s
	[INFO] 10.244.0.8:47934 - 56724 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038372s
	[INFO] 10.244.0.8:47934 - 6297 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105405s
	[INFO] 10.244.0.8:48416 - 43339 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095854s
	[INFO] 10.244.0.8:48416 - 20808 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089325s
	[INFO] 10.244.0.8:60809 - 24507 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090972s
	[INFO] 10.244.0.8:60809 - 27316 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000241444s
	[INFO] 10.244.0.8:39141 - 61060 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013393s
	[INFO] 10.244.0.8:39141 - 6786 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000294732s
	[INFO] 10.244.0.8:47336 - 11940 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039145s
	[INFO] 10.244.0.8:47336 - 21158 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101081s
	[INFO] 10.244.0.8:36849 - 58078 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195322s
	[INFO] 10.244.0.8:36849 - 19164 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290198s
	[INFO] 10.244.0.22:57715 - 978 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363634s
	[INFO] 10.244.0.22:36290 - 10290 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000102337s
	[INFO] 10.244.0.22:59607 - 56162 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068575s
	[INFO] 10.244.0.22:57832 - 20486 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115987s
	[INFO] 10.244.0.22:47101 - 58158 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072188s
	[INFO] 10.244.0.22:54115 - 35881 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059499s
	[INFO] 10.244.0.22:38928 - 44111 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003739828s
	[INFO] 10.244.0.22:51045 - 42584 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003766695s
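	The NXDOMAIN/NOERROR pairs above are the normal ndots search-path expansion, and the final NOERROR answers show registry.kube-system.svc.cluster.local did resolve, so the wget timeout in the failing test happened after DNS. A minimal sketch to repeat that lookup from a throwaway pod, assuming the addons-132210 context is still reachable (the busybox tag and pod name are illustrative assumptions; the registry service name follows the addon default):
	
	kubectl --context addons-132210 run dns-probe --rm -i --restart=Never \
	  --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local
	kubectl --context addons-132210 -n kube-system get svc registry -o wide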
	
	
	==> describe nodes <==
	Name:               addons-132210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-132210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-132210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_07_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-132210
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:07:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-132210
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:19:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:18:53 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:18:53 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:18:53 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:18:53 +0000   Sat, 31 Aug 2024 22:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    addons-132210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 12c3f930f06943eb9eedcbe740b437c1
	  System UUID:                12c3f930-f069-43eb-9eed-cbe740b437c1
	  Boot ID:                    0c2dfdc3-b8db-4280-8b08-729176a830ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  gcp-auth                    gcp-auth-89d5ffd79-6n2z6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-57fb76fcdb-zb4l7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vtskh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-fg5wn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-132210                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-132210                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-132210       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-pf4zb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-132210                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-4mp2p             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-99v85        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-132210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-132210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-132210 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m   kubelet          Node addons-132210 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node addons-132210 event: Registered Node addons-132210 in Controller
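	The node is Ready and CPU requests already sit at 950m of 2 cores (47%), so the late-arriving headlamp and nginx pods still fit but headroom is thin. A short sketch to re-pull just the scheduling-relevant fields, assuming kubectl access to the same context:
	
	kubectl --context addons-132210 describe node addons-132210 | grep -A 7 "Allocated resources"
	kubectl --context addons-132210 get pods -A --field-selector spec.nodeName=addons-132210 -o wide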
	
	
	==> dmesg <==
	[  +5.037936] kauditd_printk_skb: 61 callbacks suppressed
	[ +10.425627] kauditd_printk_skb: 9 callbacks suppressed
	[Aug31 22:08] kauditd_printk_skb: 41 callbacks suppressed
	[ +10.213253] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.886474] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.896138] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.604296] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.756625] kauditd_printk_skb: 12 callbacks suppressed
	[Aug31 22:09] kauditd_printk_skb: 12 callbacks suppressed
	[ +32.975043] kauditd_printk_skb: 32 callbacks suppressed
	[ +15.460927] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.545206] kauditd_printk_skb: 2 callbacks suppressed
	[Aug31 22:10] kauditd_printk_skb: 9 callbacks suppressed
	[Aug31 22:11] kauditd_printk_skb: 28 callbacks suppressed
	[Aug31 22:14] kauditd_printk_skb: 28 callbacks suppressed
	[Aug31 22:18] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.322338] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.045430] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.602574] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.435262] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.828071] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.293926] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.470597] kauditd_printk_skb: 6 callbacks suppressed
	[Aug31 22:19] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.179886] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] <==
	{"level":"warn","ts":"2024-08-31T22:08:38.406522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.287295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:38.406577Z","caller":"traceutil/trace.go:171","msg":"trace[1395541970] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"105.345528ms","start":"2024-08-31T22:08:38.301224Z","end":"2024-08-31T22:08:38.406569Z","steps":["trace[1395541970] 'agreement among raft nodes before linearized reading'  (duration: 105.278207ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:40.364527Z","caller":"traceutil/trace.go:171","msg":"trace[449812390] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"147.838759ms","start":"2024-08-31T22:08:40.216672Z","end":"2024-08-31T22:08:40.364511Z","steps":["trace[449812390] 'process raft request'  (duration: 147.722077ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:46.932596Z","caller":"traceutil/trace.go:171","msg":"trace[1095917297] linearizableReadLoop","detail":"{readStateIndex:1115; appliedIndex:1114; }","duration":"131.800969ms","start":"2024-08-31T22:08:46.800782Z","end":"2024-08-31T22:08:46.932583Z","steps":["trace[1095917297] 'read index received'  (duration: 131.639117ms)","trace[1095917297] 'applied index is now lower than readState.Index'  (duration: 161.4µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:08:46.932831Z","caller":"traceutil/trace.go:171","msg":"trace[219276231] transaction","detail":"{read_only:false; response_revision:1084; number_of_response:1; }","duration":"225.630773ms","start":"2024-08-31T22:08:46.707192Z","end":"2024-08-31T22:08:46.932823Z","steps":["trace[219276231] 'process raft request'  (duration: 225.308004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.268644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933105Z","caller":"traceutil/trace.go:171","msg":"trace[850287395] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1084; }","duration":"132.320492ms","start":"2024-08-31T22:08:46.800778Z","end":"2024-08-31T22:08:46.933098Z","steps":["trace[850287395] 'agreement among raft nodes before linearized reading'  (duration: 132.252602ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.180196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933247Z","caller":"traceutil/trace.go:171","msg":"trace[660106792] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1084; }","duration":"123.218846ms","start":"2024-08-31T22:08:46.810023Z","end":"2024-08-31T22:08:46.933242Z","steps":["trace[660106792] 'agreement among raft nodes before linearized reading'  (duration: 123.16896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.542858ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933623Z","caller":"traceutil/trace.go:171","msg":"trace[330872549] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1084; }","duration":"108.584553ms","start":"2024-08-31T22:08:46.825032Z","end":"2024-08-31T22:08:46.933616Z","steps":["trace[330872549] 'agreement among raft nodes before linearized reading'  (duration: 108.535322ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:49.655357Z","caller":"traceutil/trace.go:171","msg":"trace[165319690] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"136.350729ms","start":"2024-08-31T22:08:49.518991Z","end":"2024-08-31T22:08:49.655342Z","steps":["trace[165319690] 'process raft request'  (duration: 136.128055ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:49.661370Z","caller":"traceutil/trace.go:171","msg":"trace[1593117983] transaction","detail":"{read_only:false; response_revision:1101; number_of_response:1; }","duration":"135.493651ms","start":"2024-08-31T22:08:49.525861Z","end":"2024-08-31T22:08:49.661354Z","steps":["trace[1593117983] 'process raft request'  (duration: 134.988688ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:54.136074Z","caller":"traceutil/trace.go:171","msg":"trace[1104677073] linearizableReadLoop","detail":"{readStateIndex:1165; appliedIndex:1164; }","duration":"172.969109ms","start":"2024-08-31T22:08:53.963035Z","end":"2024-08-31T22:08:54.136004Z","steps":["trace[1104677073] 'read index received'  (duration: 170.41125ms)","trace[1104677073] 'applied index is now lower than readState.Index'  (duration: 2.557067ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-31T22:08:54.136319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.226891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:54.136413Z","caller":"traceutil/trace.go:171","msg":"trace[851686441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"173.346413ms","start":"2024-08-31T22:08:53.963007Z","end":"2024-08-31T22:08:54.136353Z","steps":["trace[851686441] 'agreement among raft nodes before linearized reading'  (duration: 173.201927ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:09:11.180801Z","caller":"traceutil/trace.go:171","msg":"trace[143927082] linearizableReadLoop","detail":"{readStateIndex:1232; appliedIndex:1231; }","duration":"217.79961ms","start":"2024-08-31T22:09:10.962974Z","end":"2024-08-31T22:09:11.180774Z","steps":["trace[143927082] 'read index received'  (duration: 217.657091ms)","trace[143927082] 'applied index is now lower than readState.Index'  (duration: 142.006µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:09:11.180954Z","caller":"traceutil/trace.go:171","msg":"trace[41968220] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"247.07813ms","start":"2024-08-31T22:09:10.933868Z","end":"2024-08-31T22:09:11.180946Z","steps":["trace[41968220] 'process raft request'  (duration: 246.800851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:09:11.181156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.482027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"warn","ts":"2024-08-31T22:09:11.181231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.247568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:09:11.181305Z","caller":"traceutil/trace.go:171","msg":"trace[497721371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1196; }","duration":"218.327497ms","start":"2024-08-31T22:09:10.962970Z","end":"2024-08-31T22:09:11.181277Z","steps":["trace[497721371] 'agreement among raft nodes before linearized reading'  (duration: 218.228122ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:09:11.181240Z","caller":"traceutil/trace.go:171","msg":"trace[450022890] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1196; }","duration":"132.57275ms","start":"2024-08-31T22:09:11.048648Z","end":"2024-08-31T22:09:11.181221Z","steps":["trace[450022890] 'agreement among raft nodes before linearized reading'  (duration: 132.417556ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:17:14.568202Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1526}
	{"level":"info","ts":"2024-08-31T22:17:14.607762Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1526,"took":"38.547549ms","hash":33265301,"current-db-size-bytes":6266880,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3313664,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-08-31T22:17:14.607883Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":33265301,"revision":1526,"compact-revision":-1}
	
	
	==> gcp-auth [a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0] <==
	2024/08/31 22:09:59 Ready to write response ...
	2024/08/31 22:09:59 Ready to marshal response ...
	2024/08/31 22:09:59 Ready to write response ...
	2024/08/31 22:18:02 Ready to marshal response ...
	2024/08/31 22:18:02 Ready to write response ...
	2024/08/31 22:18:02 Ready to marshal response ...
	2024/08/31 22:18:02 Ready to write response ...
	2024/08/31 22:18:13 Ready to marshal response ...
	2024/08/31 22:18:13 Ready to write response ...
	2024/08/31 22:18:14 Ready to marshal response ...
	2024/08/31 22:18:14 Ready to write response ...
	2024/08/31 22:18:18 Ready to marshal response ...
	2024/08/31 22:18:18 Ready to write response ...
	2024/08/31 22:18:38 Ready to marshal response ...
	2024/08/31 22:18:38 Ready to write response ...
	2024/08/31 22:18:59 Ready to marshal response ...
	2024/08/31 22:18:59 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:10 Ready to marshal response ...
	2024/08/31 22:19:10 Ready to write response ...
	
	
	==> kernel <==
	 22:19:16 up 12 min,  0 users,  load average: 1.30, 0.69, 0.50
	Linux addons-132210 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] <==
	W0831 22:08:55.517555       1 handler_proxy.go:99] no RequestInfo found in the context
	E0831 22:08:55.517599       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0831 22:08:55.517717       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.101.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.101.143:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I0831 22:08:55.540025       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0831 22:18:30.350419       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0831 22:18:31.825010       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:18:54.356239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.356735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.443718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.443780       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.469865       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.470384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.501684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.501737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:18:55.472783       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:18:55.502100       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0831 22:18:55.519265       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0831 22:19:04.668043       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0831 22:19:05.793178       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:19:06.516419       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.208.123"}
	I0831 22:19:10.568572       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:19:10.763197       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.55.157"}
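	The 503 against v1beta1.metrics.k8s.io at 22:08:55 is most likely the usual transient while metrics-server starts; the later lines show CRD group-versions being added and removed as addons are toggled. A sketch to confirm the aggregated API recovered, assuming the same context:
	
	kubectl --context addons-132210 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-132210 top node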
	
	
	==> kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] <==
	I0831 22:19:03.136433       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0831 22:19:04.364079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:04.364135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:05.603967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="7.672µs"
	E0831 22:19:05.794780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:06.591523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="50.045466ms"
	I0831 22:19:06.619980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="26.639253ms"
	I0831 22:19:06.621779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="76.791µs"
	W0831 22:19:07.182442       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:07.182552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:08.842657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:08.842799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:13.493775       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:13.493829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:14.170027       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="66.763µs"
	I0831 22:19:14.231923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="23.444585ms"
	I0831 22:19:14.234724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="52.829µs"
	I0831 22:19:14.254578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="5.116µs"
	I0831 22:19:14.955230       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0831 22:19:15.061532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:15.061602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:15.169875       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:15.169954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:15.610873       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:15.610954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
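	The "failed to list *v1.PartialObjectMetadata" errors begin right after the snapshot and gadget CRDs are torn down (22:18:54-22:19:05 in the apiserver log), so they look like the metadata informer chasing resources that no longer exist rather than a controller fault. A sketch to check which of those CRDs remain, assuming the same context:
	
	kubectl --context addons-132210 get crd | grep -E "snapshot|gadget" || echo "no matching CRDs left"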
	
	
	==> kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:07:25.903033       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:07:25.911310       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0831 22:07:25.911403       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:07:25.982344       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:07:25.982403       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:07:25.982435       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:07:25.985880       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:07:25.986197       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:07:25.986208       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:07:25.987987       1 config.go:197] "Starting service config controller"
	I0831 22:07:25.988004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:07:25.988023       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:07:25.988027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:07:25.988362       1 config.go:326] "Starting node config controller"
	I0831 22:07:25.988369       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:07:26.089133       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:07:26.089163       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:07:26.089183       1 shared_informer.go:320] Caches are synced for endpoint slice config
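	The nftables cleanup errors at startup indicate the guest kernel rejects the nft rules, and the log shows kube-proxy falling back to the iptables proxier and syncing its caches. A sketch to confirm the service rules were actually programmed, via minikube ssh (binary path copied from the commands used elsewhere in this report):
	
	out/minikube-linux-amd64 -p addons-132210 ssh -- sudo iptables -t nat -S KUBE-SERVICES | head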
	
	
	==> kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] <==
	E0831 22:07:16.254732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0831 22:07:16.241012       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:07:17.051102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:07:17.051135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.097676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:07:17.097729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.116710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:07:17.116759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.238680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:07:17.238731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.308444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:07:17.308680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.361218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:07:17.361749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.445778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:07:17.445880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.451014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:07:17.451126       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:07:17.464610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:07:17.464787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.482630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:07:17.482757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.545180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:07:17.545318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:07:19.433315       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:19:13 addons-132210 kubelet[1197]: I0831 22:19:13.811013    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac72956e-ad77-40ef-8bcc-aa0ca381bd62-gcp-creds\") pod \"ac72956e-ad77-40ef-8bcc-aa0ca381bd62\" (UID: \"ac72956e-ad77-40ef-8bcc-aa0ca381bd62\") "
	Aug 31 22:19:13 addons-132210 kubelet[1197]: I0831 22:19:13.811079    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7vg\" (UniqueName: \"kubernetes.io/projected/ac72956e-ad77-40ef-8bcc-aa0ca381bd62-kube-api-access-tq7vg\") pod \"ac72956e-ad77-40ef-8bcc-aa0ca381bd62\" (UID: \"ac72956e-ad77-40ef-8bcc-aa0ca381bd62\") "
	Aug 31 22:19:13 addons-132210 kubelet[1197]: I0831 22:19:13.811492    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac72956e-ad77-40ef-8bcc-aa0ca381bd62-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ac72956e-ad77-40ef-8bcc-aa0ca381bd62" (UID: "ac72956e-ad77-40ef-8bcc-aa0ca381bd62"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 31 22:19:13 addons-132210 kubelet[1197]: I0831 22:19:13.813058    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac72956e-ad77-40ef-8bcc-aa0ca381bd62-kube-api-access-tq7vg" (OuterVolumeSpecName: "kube-api-access-tq7vg") pod "ac72956e-ad77-40ef-8bcc-aa0ca381bd62" (UID: "ac72956e-ad77-40ef-8bcc-aa0ca381bd62"). InnerVolumeSpecName "kube-api-access-tq7vg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:13 addons-132210 kubelet[1197]: I0831 22:19:13.911533    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tq7vg\" (UniqueName: \"kubernetes.io/projected/ac72956e-ad77-40ef-8bcc-aa0ca381bd62-kube-api-access-tq7vg\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:19:13 addons-132210 kubelet[1197]: I0831 22:19:13.911586    1197 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac72956e-ad77-40ef-8bcc-aa0ca381bd62-gcp-creds\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.205420    1197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-57fb76fcdb-zb4l7" podStartSLOduration=2.7944589349999998 podStartE2EDuration="8.205380634s" podCreationTimestamp="2024-08-31 22:19:06 +0000 UTC" firstStartedPulling="2024-08-31 22:19:07.730382632 +0000 UTC m=+709.226344136" lastFinishedPulling="2024-08-31 22:19:13.14130433 +0000 UTC m=+714.637265835" observedRunningTime="2024-08-31 22:19:14.171679505 +0000 UTC m=+715.667641028" watchObservedRunningTime="2024-08-31 22:19:14.205380634 +0000 UTC m=+715.701342157"
	Aug 31 22:19:14 addons-132210 kubelet[1197]: E0831 22:19:14.671713    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4d4c7d4f-e101-4a1a-8b8f-6d8a0cd8de3f"
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.679807    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac72956e-ad77-40ef-8bcc-aa0ca381bd62" path="/var/lib/kubelet/pods/ac72956e-ad77-40ef-8bcc-aa0ca381bd62/volumes"
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.717311    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rts5c\" (UniqueName: \"kubernetes.io/projected/1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78-kube-api-access-rts5c\") pod \"1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78\" (UID: \"1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78\") "
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.721114    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78-kube-api-access-rts5c" (OuterVolumeSpecName: "kube-api-access-rts5c") pod "1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78" (UID: "1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78"). InnerVolumeSpecName "kube-api-access-rts5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.818781    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmbbc\" (UniqueName: \"kubernetes.io/projected/49867dc1-8d92-48f0-8c8b-50a65936ad12-kube-api-access-lmbbc\") pod \"49867dc1-8d92-48f0-8c8b-50a65936ad12\" (UID: \"49867dc1-8d92-48f0-8c8b-50a65936ad12\") "
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.819831    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rts5c\" (UniqueName: \"kubernetes.io/projected/1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78-kube-api-access-rts5c\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.821863    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49867dc1-8d92-48f0-8c8b-50a65936ad12-kube-api-access-lmbbc" (OuterVolumeSpecName: "kube-api-access-lmbbc") pod "49867dc1-8d92-48f0-8c8b-50a65936ad12" (UID: "49867dc1-8d92-48f0-8c8b-50a65936ad12"). InnerVolumeSpecName "kube-api-access-lmbbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:14 addons-132210 kubelet[1197]: I0831 22:19:14.920686    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lmbbc\" (UniqueName: \"kubernetes.io/projected/49867dc1-8d92-48f0-8c8b-50a65936ad12-kube-api-access-lmbbc\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:19:15 addons-132210 kubelet[1197]: I0831 22:19:15.173368    1197 scope.go:117] "RemoveContainer" containerID="f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: I0831 22:19:15.232266    1197 scope.go:117] "RemoveContainer" containerID="f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: E0831 22:19:15.232779    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f\": container with ID starting with f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f not found: ID does not exist" containerID="f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: I0831 22:19:15.232808    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f"} err="failed to get container status \"f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f\": rpc error: code = NotFound desc = could not find container \"f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f\": container with ID starting with f49d2490c97db143fbc2401f2ddca728a01e16ddc81f589d01693e7b5164b32f not found: ID does not exist"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: I0831 22:19:15.232830    1197 scope.go:117] "RemoveContainer" containerID="7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: I0831 22:19:15.254008    1197 scope.go:117] "RemoveContainer" containerID="7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: E0831 22:19:15.256201    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71\": container with ID starting with 7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71 not found: ID does not exist" containerID="7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71"
	Aug 31 22:19:15 addons-132210 kubelet[1197]: I0831 22:19:15.256253    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71"} err="failed to get container status \"7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71\": rpc error: code = NotFound desc = could not find container \"7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71\": container with ID starting with 7424b164282f17393b0df7c9eb32f8c30af66f406a3cf3deec00ed76f5bebc71 not found: ID does not exist"
	Aug 31 22:19:16 addons-132210 kubelet[1197]: I0831 22:19:16.674592    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78" path="/var/lib/kubelet/pods/1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78/volumes"
	Aug 31 22:19:16 addons-132210 kubelet[1197]: I0831 22:19:16.675030    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49867dc1-8d92-48f0-8c8b-50a65936ad12" path="/var/lib/kubelet/pods/49867dc1-8d92-48f0-8c8b-50a65936ad12/volumes"
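	The kubelet entries show the registry and registry-proxy pod volumes being torn down at 22:19:14-22:19:15, i.e. cleanup after the addon was disabled, plus the ongoing ImagePullBackOff for the unrelated busybox pod. A sketch for pulling a wider kubelet window than the 25-line capture above, assuming the VM is still up:
	
	out/minikube-linux-amd64 -p addons-132210 ssh -- sudo journalctl -u kubelet --no-pager -n 200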
	
	
	==> storage-provisioner [0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e] <==
	I0831 22:07:33.356182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:07:33.426579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:07:33.426654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:07:33.847351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:07:33.848726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5!
	I0831 22:07:33.850075       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8a1f8e-16e7-4a54-81fb-1116caaffa55", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5 became leader
	I0831 22:07:33.951304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-132210 -n addons-132210
helpers_test.go:262: (dbg) Run:  kubectl --context addons-132210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox nginx ingress-nginx-admission-create-lffjf ingress-nginx-admission-patch-5wr2c
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-132210 describe pod busybox nginx ingress-nginx-admission-create-lffjf ingress-nginx-admission-patch-5wr2c
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context addons-132210 describe pod busybox nginx ingress-nginx-admission-create-lffjf ingress-nginx-admission-patch-5wr2c: exit status 1 (85.026246ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-132210/192.168.39.12
	Start Time:       Sat, 31 Aug 2024 22:09:59 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzs9l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wzs9l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-132210
	  Normal   Pulling    7m53s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m26s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-132210/192.168.39.12
	Start Time:       Sat, 31 Aug 2024 22:19:10 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tdbk6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tdbk6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/nginx to addons-132210
	  Normal  Pulling    4s    kubelet            Pulling image "docker.io/nginx:alpine"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/nginx:alpine" in 4.077s (4.077s including waiting). Image size: 44668625 bytes.
	  Normal  Created    0s    kubelet            Created container nginx

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lffjf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5wr2c" not found

                                                
                                                
** /stderr **
helpers_test.go:280: kubectl --context addons-132210 describe pod busybox nginx ingress-nginx-admission-create-lffjf ingress-nginx-admission-patch-5wr2c: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.97s)

                                                
                                    
TestAddons/parallel/Ingress (157.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-132210 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-132210 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-132210 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:345: "nginx" [7c9d33c8-b37d-4376-9ade-e9dcf4168c22] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx" [7c9d33c8-b37d-4376-9ade-e9dcf4168c22] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004074298s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-132210 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.443333055s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-132210 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.12
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 addons disable ingress-dns --alsologtostderr -v=1: (1.106604936s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 addons disable ingress --alsologtostderr -v=1: (7.683320599s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-132210 -n addons-132210
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 logs -n 25: (1.240690393s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-777221                                                                     | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-160287                                                                     | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-777221                                                                     | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-465268 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | binary-mirror-465268                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45273                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-465268                                                                     | binary-mirror-465268 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-132210 --wait=true                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-132210 ssh cat                                                                       | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | /opt/local-path-provisioner/pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | -p addons-132210                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-132210 ip                                                                            | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | -p addons-132210                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-132210 ssh curl -s                                                                   | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-132210 ip                                                                            | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:21 UTC | 31 Aug 24 22:21 UTC |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:21 UTC | 31 Aug 24 22:21 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:21 UTC | 31 Aug 24 22:21 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:37.544876   21098 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:37.545155   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:37.545165   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:37.545172   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:37.545383   21098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:06:37.545946   21098 out.go:352] Setting JSON to false
	I0831 22:06:37.546798   21098 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2945,"bootTime":1725139053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:37.546859   21098 start.go:139] virtualization: kvm guest
	I0831 22:06:37.548701   21098 out.go:177] * [addons-132210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:06:37.550111   21098 notify.go:220] Checking for updates...
	I0831 22:06:37.550129   21098 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:06:37.551500   21098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:37.552938   21098 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:06:37.554280   21098 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:37.555749   21098 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:06:37.557091   21098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:06:37.558401   21098 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:37.589360   21098 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 22:06:37.590841   21098 start.go:297] selected driver: kvm2
	I0831 22:06:37.590856   21098 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:06:37.590868   21098 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:06:37.591824   21098 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:37.591929   21098 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:06:37.606642   21098 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:06:37.606704   21098 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:37.606922   21098 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:06:37.606953   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:06:37.606960   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:06:37.606967   21098 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:37.607020   21098 start.go:340] cluster config:
	{Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:37.607103   21098 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:37.608999   21098 out.go:177] * Starting "addons-132210" primary control-plane node in "addons-132210" cluster
	I0831 22:06:37.610406   21098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:06:37.610441   21098 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:37.610451   21098 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:37.610537   21098 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:06:37.610551   21098 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:06:37.610893   21098 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json ...
	I0831 22:06:37.610917   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json: {Name:mk700584d59ad42df80709b4fc4c500ed7306a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:37.611077   21098 start.go:360] acquireMachinesLock for addons-132210: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:06:37.611133   21098 start.go:364] duration metric: took 40.383µs to acquireMachinesLock for "addons-132210"
	I0831 22:06:37.611156   21098 start.go:93] Provisioning new machine with config: &{Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:06:37.611223   21098 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 22:06:37.613166   21098 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0831 22:06:37.613301   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:06:37.613345   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:06:37.627241   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0831 22:06:37.627637   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:06:37.628132   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:06:37.628166   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:06:37.628421   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:06:37.628636   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:06:37.628770   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:06:37.628882   21098 start.go:159] libmachine.API.Create for "addons-132210" (driver="kvm2")
	I0831 22:06:37.628903   21098 client.go:168] LocalClient.Create starting
	I0831 22:06:37.628944   21098 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:06:37.824136   21098 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:06:38.014796   21098 main.go:141] libmachine: Running pre-create checks...
	I0831 22:06:38.014823   21098 main.go:141] libmachine: (addons-132210) Calling .PreCreateCheck
	I0831 22:06:38.015353   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:06:38.015789   21098 main.go:141] libmachine: Creating machine...
	I0831 22:06:38.015803   21098 main.go:141] libmachine: (addons-132210) Calling .Create
	I0831 22:06:38.015942   21098 main.go:141] libmachine: (addons-132210) Creating KVM machine...
	I0831 22:06:38.017102   21098 main.go:141] libmachine: (addons-132210) DBG | found existing default KVM network
	I0831 22:06:38.017881   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.017718   21120 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0831 22:06:38.017904   21098 main.go:141] libmachine: (addons-132210) DBG | created network xml: 
	I0831 22:06:38.017916   21098 main.go:141] libmachine: (addons-132210) DBG | <network>
	I0831 22:06:38.017928   21098 main.go:141] libmachine: (addons-132210) DBG |   <name>mk-addons-132210</name>
	I0831 22:06:38.017940   21098 main.go:141] libmachine: (addons-132210) DBG |   <dns enable='no'/>
	I0831 22:06:38.017950   21098 main.go:141] libmachine: (addons-132210) DBG |   
	I0831 22:06:38.017970   21098 main.go:141] libmachine: (addons-132210) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0831 22:06:38.017978   21098 main.go:141] libmachine: (addons-132210) DBG |     <dhcp>
	I0831 22:06:38.017991   21098 main.go:141] libmachine: (addons-132210) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0831 22:06:38.018001   21098 main.go:141] libmachine: (addons-132210) DBG |     </dhcp>
	I0831 22:06:38.018013   21098 main.go:141] libmachine: (addons-132210) DBG |   </ip>
	I0831 22:06:38.018023   21098 main.go:141] libmachine: (addons-132210) DBG |   
	I0831 22:06:38.018033   21098 main.go:141] libmachine: (addons-132210) DBG | </network>
	I0831 22:06:38.018046   21098 main.go:141] libmachine: (addons-132210) DBG | 
	I0831 22:06:38.023383   21098 main.go:141] libmachine: (addons-132210) DBG | trying to create private KVM network mk-addons-132210 192.168.39.0/24...
	I0831 22:06:38.089434   21098 main.go:141] libmachine: (addons-132210) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 ...
	I0831 22:06:38.089471   21098 main.go:141] libmachine: (addons-132210) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:06:38.089479   21098 main.go:141] libmachine: (addons-132210) DBG | private KVM network mk-addons-132210 192.168.39.0/24 created
	I0831 22:06:38.089493   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.089368   21120 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:38.089534   21098 main.go:141] libmachine: (addons-132210) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:06:38.337644   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.337536   21120 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa...
	I0831 22:06:38.706397   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.706261   21120 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/addons-132210.rawdisk...
	I0831 22:06:38.706425   21098 main.go:141] libmachine: (addons-132210) DBG | Writing magic tar header
	I0831 22:06:38.706435   21098 main.go:141] libmachine: (addons-132210) DBG | Writing SSH key tar header
	I0831 22:06:38.706447   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.706368   21120 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 ...
	I0831 22:06:38.706460   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210
	I0831 22:06:38.706528   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 (perms=drwx------)
	I0831 22:06:38.706557   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:06:38.706570   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:06:38.706579   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:38.706596   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:06:38.706607   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:06:38.706621   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:06:38.706633   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:06:38.706649   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:06:38.706662   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:06:38.706672   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:06:38.706683   21098 main.go:141] libmachine: (addons-132210) Creating domain...
	I0831 22:06:38.706692   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home
	I0831 22:06:38.706704   21098 main.go:141] libmachine: (addons-132210) DBG | Skipping /home - not owner
	I0831 22:06:38.707726   21098 main.go:141] libmachine: (addons-132210) define libvirt domain using xml: 
	I0831 22:06:38.707749   21098 main.go:141] libmachine: (addons-132210) <domain type='kvm'>
	I0831 22:06:38.707757   21098 main.go:141] libmachine: (addons-132210)   <name>addons-132210</name>
	I0831 22:06:38.707766   21098 main.go:141] libmachine: (addons-132210)   <memory unit='MiB'>4000</memory>
	I0831 22:06:38.707792   21098 main.go:141] libmachine: (addons-132210)   <vcpu>2</vcpu>
	I0831 22:06:38.707816   21098 main.go:141] libmachine: (addons-132210)   <features>
	I0831 22:06:38.707830   21098 main.go:141] libmachine: (addons-132210)     <acpi/>
	I0831 22:06:38.707843   21098 main.go:141] libmachine: (addons-132210)     <apic/>
	I0831 22:06:38.707865   21098 main.go:141] libmachine: (addons-132210)     <pae/>
	I0831 22:06:38.707885   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.707895   21098 main.go:141] libmachine: (addons-132210)   </features>
	I0831 22:06:38.707905   21098 main.go:141] libmachine: (addons-132210)   <cpu mode='host-passthrough'>
	I0831 22:06:38.707915   21098 main.go:141] libmachine: (addons-132210)   
	I0831 22:06:38.707924   21098 main.go:141] libmachine: (addons-132210)   </cpu>
	I0831 22:06:38.707929   21098 main.go:141] libmachine: (addons-132210)   <os>
	I0831 22:06:38.707936   21098 main.go:141] libmachine: (addons-132210)     <type>hvm</type>
	I0831 22:06:38.707942   21098 main.go:141] libmachine: (addons-132210)     <boot dev='cdrom'/>
	I0831 22:06:38.707948   21098 main.go:141] libmachine: (addons-132210)     <boot dev='hd'/>
	I0831 22:06:38.707954   21098 main.go:141] libmachine: (addons-132210)     <bootmenu enable='no'/>
	I0831 22:06:38.707960   21098 main.go:141] libmachine: (addons-132210)   </os>
	I0831 22:06:38.707966   21098 main.go:141] libmachine: (addons-132210)   <devices>
	I0831 22:06:38.707975   21098 main.go:141] libmachine: (addons-132210)     <disk type='file' device='cdrom'>
	I0831 22:06:38.708007   21098 main.go:141] libmachine: (addons-132210)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/boot2docker.iso'/>
	I0831 22:06:38.708027   21098 main.go:141] libmachine: (addons-132210)       <target dev='hdc' bus='scsi'/>
	I0831 22:06:38.708034   21098 main.go:141] libmachine: (addons-132210)       <readonly/>
	I0831 22:06:38.708039   21098 main.go:141] libmachine: (addons-132210)     </disk>
	I0831 22:06:38.708051   21098 main.go:141] libmachine: (addons-132210)     <disk type='file' device='disk'>
	I0831 22:06:38.708065   21098 main.go:141] libmachine: (addons-132210)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:06:38.708082   21098 main.go:141] libmachine: (addons-132210)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/addons-132210.rawdisk'/>
	I0831 22:06:38.708092   21098 main.go:141] libmachine: (addons-132210)       <target dev='hda' bus='virtio'/>
	I0831 22:06:38.708106   21098 main.go:141] libmachine: (addons-132210)     </disk>
	I0831 22:06:38.708123   21098 main.go:141] libmachine: (addons-132210)     <interface type='network'>
	I0831 22:06:38.708137   21098 main.go:141] libmachine: (addons-132210)       <source network='mk-addons-132210'/>
	I0831 22:06:38.708149   21098 main.go:141] libmachine: (addons-132210)       <model type='virtio'/>
	I0831 22:06:38.708162   21098 main.go:141] libmachine: (addons-132210)     </interface>
	I0831 22:06:38.708173   21098 main.go:141] libmachine: (addons-132210)     <interface type='network'>
	I0831 22:06:38.708181   21098 main.go:141] libmachine: (addons-132210)       <source network='default'/>
	I0831 22:06:38.708190   21098 main.go:141] libmachine: (addons-132210)       <model type='virtio'/>
	I0831 22:06:38.708213   21098 main.go:141] libmachine: (addons-132210)     </interface>
	I0831 22:06:38.708228   21098 main.go:141] libmachine: (addons-132210)     <serial type='pty'>
	I0831 22:06:38.708239   21098 main.go:141] libmachine: (addons-132210)       <target port='0'/>
	I0831 22:06:38.708252   21098 main.go:141] libmachine: (addons-132210)     </serial>
	I0831 22:06:38.708262   21098 main.go:141] libmachine: (addons-132210)     <console type='pty'>
	I0831 22:06:38.708276   21098 main.go:141] libmachine: (addons-132210)       <target type='serial' port='0'/>
	I0831 22:06:38.708292   21098 main.go:141] libmachine: (addons-132210)     </console>
	I0831 22:06:38.708304   21098 main.go:141] libmachine: (addons-132210)     <rng model='virtio'>
	I0831 22:06:38.708316   21098 main.go:141] libmachine: (addons-132210)       <backend model='random'>/dev/random</backend>
	I0831 22:06:38.708328   21098 main.go:141] libmachine: (addons-132210)     </rng>
	I0831 22:06:38.708338   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.708349   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.708362   21098 main.go:141] libmachine: (addons-132210)   </devices>
	I0831 22:06:38.708377   21098 main.go:141] libmachine: (addons-132210) </domain>
	I0831 22:06:38.708386   21098 main.go:141] libmachine: (addons-132210) 
	I0831 22:06:38.714749   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:04:9d:ea in network default
	I0831 22:06:38.715229   21098 main.go:141] libmachine: (addons-132210) Ensuring networks are active...
	I0831 22:06:38.715251   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:38.715857   21098 main.go:141] libmachine: (addons-132210) Ensuring network default is active
	I0831 22:06:38.716174   21098 main.go:141] libmachine: (addons-132210) Ensuring network mk-addons-132210 is active
	I0831 22:06:38.716662   21098 main.go:141] libmachine: (addons-132210) Getting domain xml...
	I0831 22:06:38.717336   21098 main.go:141] libmachine: (addons-132210) Creating domain...
	I0831 22:06:40.114794   21098 main.go:141] libmachine: (addons-132210) Waiting to get IP...
	I0831 22:06:40.115527   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.115799   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.115829   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.115776   21120 retry.go:31] will retry after 204.646064ms: waiting for machine to come up
	I0831 22:06:40.322141   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.322530   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.322561   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.322474   21120 retry.go:31] will retry after 367.388706ms: waiting for machine to come up
	I0831 22:06:40.691020   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.691359   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.691385   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.691306   21120 retry.go:31] will retry after 449.926201ms: waiting for machine to come up
	I0831 22:06:41.142806   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:41.143371   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:41.143398   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:41.143199   21120 retry.go:31] will retry after 411.198107ms: waiting for machine to come up
	I0831 22:06:41.555507   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:41.556022   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:41.556044   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:41.555945   21120 retry.go:31] will retry after 684.989531ms: waiting for machine to come up
	I0831 22:06:42.242958   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:42.243440   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:42.243461   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:42.243416   21120 retry.go:31] will retry after 922.263131ms: waiting for machine to come up
	I0831 22:06:43.167145   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:43.167604   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:43.167629   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:43.167554   21120 retry.go:31] will retry after 879.584878ms: waiting for machine to come up
	I0831 22:06:44.048638   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:44.048976   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:44.048997   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:44.048933   21120 retry.go:31] will retry after 1.427746455s: waiting for machine to come up
	I0831 22:06:45.478039   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:45.478640   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:45.478666   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:45.478603   21120 retry.go:31] will retry after 1.190362049s: waiting for machine to come up
	I0831 22:06:46.671043   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:46.671501   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:46.671530   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:46.671448   21120 retry.go:31] will retry after 2.196766808s: waiting for machine to come up
	I0831 22:06:48.869585   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:48.870037   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:48.870059   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:48.869999   21120 retry.go:31] will retry after 2.216870251s: waiting for machine to come up
	I0831 22:06:51.089344   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:51.089783   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:51.089804   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:51.089726   21120 retry.go:31] will retry after 3.489292564s: waiting for machine to come up
	I0831 22:06:54.581936   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:54.582398   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:54.582426   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:54.582313   21120 retry.go:31] will retry after 2.860598857s: waiting for machine to come up
	I0831 22:06:57.446192   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:57.446589   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:57.446614   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:57.446501   21120 retry.go:31] will retry after 4.269318205s: waiting for machine to come up
	I0831 22:07:01.720788   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.721275   21098 main.go:141] libmachine: (addons-132210) Found IP for machine: 192.168.39.12
	I0831 22:07:01.721302   21098 main.go:141] libmachine: (addons-132210) Reserving static IP address...
	I0831 22:07:01.721320   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has current primary IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.721673   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find host DHCP lease matching {name: "addons-132210", mac: "52:54:00:35:a4:57", ip: "192.168.39.12"} in network mk-addons-132210
	I0831 22:07:01.793692   21098 main.go:141] libmachine: (addons-132210) DBG | Getting to WaitForSSH function...
	I0831 22:07:01.793719   21098 main.go:141] libmachine: (addons-132210) Reserved static IP address: 192.168.39.12
	I0831 22:07:01.793733   21098 main.go:141] libmachine: (addons-132210) Waiting for SSH to be available...
	I0831 22:07:01.796008   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.796380   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:01.796413   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.796552   21098 main.go:141] libmachine: (addons-132210) DBG | Using SSH client type: external
	I0831 22:07:01.796581   21098 main.go:141] libmachine: (addons-132210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa (-rw-------)
	I0831 22:07:01.796618   21098 main.go:141] libmachine: (addons-132210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:07:01.796631   21098 main.go:141] libmachine: (addons-132210) DBG | About to run SSH command:
	I0831 22:07:01.796665   21098 main.go:141] libmachine: (addons-132210) DBG | exit 0
	I0831 22:07:01.927398   21098 main.go:141] libmachine: (addons-132210) DBG | SSH cmd err, output: <nil>: 
	I0831 22:07:01.927709   21098 main.go:141] libmachine: (addons-132210) KVM machine creation complete!
	I0831 22:07:01.928053   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:07:01.928588   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:01.928805   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:01.928982   21098 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:07:01.928996   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:01.930232   21098 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:07:01.930250   21098 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:07:01.930278   21098 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:07:01.930291   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:01.932160   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.932434   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:01.932466   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.932569   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:01.932748   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:01.932899   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:01.933022   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:01.933173   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:01.933359   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:01.933371   21098 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:07:02.030631   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:07:02.030654   21098 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:07:02.030661   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.033292   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.033728   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.033761   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.033978   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.034178   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.034350   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.034509   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.034664   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.034840   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.034854   21098 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:07:02.136244   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:07:02.136350   21098 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:07:02.136362   21098 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:07:02.136370   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.136633   21098 buildroot.go:166] provisioning hostname "addons-132210"
	I0831 22:07:02.136653   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.136838   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.139916   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.140414   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.140447   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.140679   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.140892   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.141063   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.141293   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.141484   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.141657   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.141672   21098 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-132210 && echo "addons-132210" | sudo tee /etc/hostname
	I0831 22:07:02.253631   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-132210
	
	I0831 22:07:02.253688   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.256261   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.256636   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.256662   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.256793   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.256965   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.257118   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.257266   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.257410   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.257558   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.257579   21098 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-132210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-132210/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-132210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:07:02.369069   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:07:02.369101   21098 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:07:02.369138   21098 buildroot.go:174] setting up certificates
	I0831 22:07:02.369148   21098 provision.go:84] configureAuth start
	I0831 22:07:02.369159   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.369509   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:02.372462   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.372743   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.372769   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.372894   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.375363   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.375809   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.375831   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.376027   21098 provision.go:143] copyHostCerts
	I0831 22:07:02.376110   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:07:02.376256   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:07:02.376417   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:07:02.376622   21098 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.addons-132210 san=[127.0.0.1 192.168.39.12 addons-132210 localhost minikube]
	I0831 22:07:02.529409   21098 provision.go:177] copyRemoteCerts
	I0831 22:07:02.529465   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:07:02.529485   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.531858   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.532087   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.532145   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.532288   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.532439   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.532600   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.532744   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:02.614769   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:07:02.640733   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:07:02.666643   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:07:02.692178   21098 provision.go:87] duration metric: took 323.018181ms to configureAuth
	I0831 22:07:02.692206   21098 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:07:02.692406   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:02.692494   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.695406   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.695687   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.695718   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.695909   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.696178   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.696371   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.696472   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.696596   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.696771   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.696792   21098 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:07:02.919512   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:07:02.919537   21098 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:07:02.919546   21098 main.go:141] libmachine: (addons-132210) Calling .GetURL
	I0831 22:07:02.920835   21098 main.go:141] libmachine: (addons-132210) DBG | Using libvirt version 6000000
	I0831 22:07:02.923016   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.923361   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.923391   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.923525   21098 main.go:141] libmachine: Docker is up and running!
	I0831 22:07:02.923543   21098 main.go:141] libmachine: Reticulating splines...
	I0831 22:07:02.923552   21098 client.go:171] duration metric: took 25.29463901s to LocalClient.Create
	I0831 22:07:02.923574   21098 start.go:167] duration metric: took 25.294693611s to libmachine.API.Create "addons-132210"
	I0831 22:07:02.923584   21098 start.go:293] postStartSetup for "addons-132210" (driver="kvm2")
	I0831 22:07:02.923593   21098 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:07:02.923609   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:02.923852   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:07:02.923871   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.925703   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.926011   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.926030   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.926155   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.926317   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.926442   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.926556   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.006717   21098 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:07:03.011232   21098 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:07:03.011262   21098 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:07:03.011362   21098 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:07:03.011394   21098 start.go:296] duration metric: took 87.804145ms for postStartSetup
	I0831 22:07:03.011427   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:07:03.012028   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:03.014629   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.014960   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.014988   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.015270   21098 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json ...
	I0831 22:07:03.015499   21098 start.go:128] duration metric: took 25.404265309s to createHost
	I0831 22:07:03.015523   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.017928   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.018268   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.018291   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.018500   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.018686   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.018822   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.018966   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.019111   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:03.019276   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:03.019286   21098 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:07:03.120128   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725142023.097010301
	
	I0831 22:07:03.120147   21098 fix.go:216] guest clock: 1725142023.097010301
	I0831 22:07:03.120190   21098 fix.go:229] Guest: 2024-08-31 22:07:03.097010301 +0000 UTC Remote: 2024-08-31 22:07:03.015511488 +0000 UTC m=+25.502821103 (delta=81.498813ms)
	I0831 22:07:03.120212   21098 fix.go:200] guest clock delta is within tolerance: 81.498813ms
	I0831 22:07:03.120217   21098 start.go:83] releasing machines lock for "addons-132210", held for 25.509073174s
	I0831 22:07:03.120236   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.120504   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:03.123087   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.123415   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.123439   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.123594   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124139   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124328   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124419   21098 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:07:03.124455   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.124550   21098 ssh_runner.go:195] Run: cat /version.json
	I0831 22:07:03.124566   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.127123   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127348   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127456   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.127478   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127620   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.127797   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.127815   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127860   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.127949   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.128037   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.128111   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.128172   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.128232   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.128351   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.200298   21098 ssh_runner.go:195] Run: systemctl --version
	I0831 22:07:03.227274   21098 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:07:03.385642   21098 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:07:03.391833   21098 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:07:03.391895   21098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:07:03.410079   21098 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:07:03.410103   21098 start.go:495] detecting cgroup driver to use...
	I0831 22:07:03.410164   21098 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:07:03.427440   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:07:03.442818   21098 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:07:03.442873   21098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:07:03.457961   21098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:07:03.472688   21098 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:07:03.587297   21098 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:07:03.750451   21098 docker.go:233] disabling docker service ...
	I0831 22:07:03.750529   21098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:07:03.765720   21098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:07:03.779301   21098 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:07:03.904389   21098 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:07:04.017402   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:07:04.032166   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:07:04.050757   21098 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:07:04.050832   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.061287   21098 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:07:04.061357   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.071771   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.082266   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.092904   21098 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:07:04.103797   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.114937   21098 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.132389   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.142812   21098 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:07:04.152012   21098 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:07:04.152067   21098 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:07:04.165405   21098 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:07:04.174718   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:04.283822   21098 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:07:04.383793   21098 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:07:04.383893   21098 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:07:04.388685   21098 start.go:563] Will wait 60s for crictl version
	I0831 22:07:04.388753   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:07:04.392620   21098 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:07:04.444477   21098 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:07:04.444598   21098 ssh_runner.go:195] Run: crio --version
	I0831 22:07:04.473736   21098 ssh_runner.go:195] Run: crio --version
	I0831 22:07:04.503698   21098 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:07:04.505075   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:04.507671   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:04.508005   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:04.508029   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:04.508213   21098 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:07:04.512325   21098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:07:04.525355   21098 kubeadm.go:883] updating cluster {Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:07:04.525461   21098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:07:04.525500   21098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:07:04.558664   21098 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0831 22:07:04.558743   21098 ssh_runner.go:195] Run: which lz4
	I0831 22:07:04.562947   21098 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 22:07:04.567112   21098 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 22:07:04.567139   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0831 22:07:05.903076   21098 crio.go:462] duration metric: took 1.340167325s to copy over tarball
	I0831 22:07:05.903140   21098 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 22:07:08.148415   21098 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.245250117s)
	I0831 22:07:08.148446   21098 crio.go:469] duration metric: took 2.245343942s to extract the tarball
	I0831 22:07:08.148455   21098 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 22:07:08.185382   21098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:07:08.228652   21098 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:07:08.228676   21098 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:07:08.228684   21098 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0831 22:07:08.228785   21098 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-132210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:07:08.228868   21098 ssh_runner.go:195] Run: crio config
	I0831 22:07:08.272478   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:07:08.272508   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:07:08.272527   21098 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:07:08.272550   21098 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-132210 NodeName:addons-132210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:07:08.272727   21098 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-132210"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:07:08.272797   21098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:07:08.282654   21098 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:07:08.282722   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:07:08.292061   21098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:07:08.308679   21098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:07:08.324837   21098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0831 22:07:08.341642   21098 ssh_runner.go:195] Run: grep 192.168.39.12	control-plane.minikube.internal$ /etc/hosts
	I0831 22:07:08.345567   21098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:07:08.357961   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:08.466928   21098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:08.482753   21098 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210 for IP: 192.168.39.12
	I0831 22:07:08.482776   21098 certs.go:194] generating shared ca certs ...
	I0831 22:07:08.482790   21098 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.482937   21098 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:07:08.597311   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt ...
	I0831 22:07:08.597339   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt: {Name:mkfc4c408c230132bbe7fe213eeea10a6827c0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.597509   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key ...
	I0831 22:07:08.597520   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key: {Name:mkd43af6d176eb1599961c21c4cf9cd0b89179f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.597585   21098 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:07:08.724372   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt ...
	I0831 22:07:08.724403   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt: {Name:mk9535d600107772240a5a04a39fba46922be0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.724563   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key ...
	I0831 22:07:08.724574   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key: {Name:mkde040c84f81ae9d500962d5b2c7d3a71ca66c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.724640   21098 certs.go:256] generating profile certs ...
	I0831 22:07:08.724688   21098 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key
	I0831 22:07:08.724702   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt with IP's: []
	I0831 22:07:08.875287   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt ...
	I0831 22:07:08.875314   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: {Name:mk5db0031ee87d851d15425d75d7b2faf9a2a074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.875490   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key ...
	I0831 22:07:08.875501   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key: {Name:mk19417e85915a2da4d854ab40b604380b362ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.875569   21098 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573
	I0831 22:07:08.875586   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12]
	I0831 22:07:08.931384   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 ...
	I0831 22:07:08.931413   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573: {Name:mk348633e181ba1f2f701144ddd9247b046d96ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.931554   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573 ...
	I0831 22:07:08.931567   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573: {Name:mk786aa380be6f62aca47aa829b55a6abecc88d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.931632   21098 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt
	I0831 22:07:08.931712   21098 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key
	I0831 22:07:08.931760   21098 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key
	I0831 22:07:08.931777   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt with IP's: []
	I0831 22:07:08.977840   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt ...
	I0831 22:07:08.977870   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt: {Name:mk26c70606574ad0633e48cf1995428b32594850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.978036   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key ...
	I0831 22:07:08.978047   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key: {Name:mk7a0020fb4b16382f09b75c285c938b4e52843a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.978220   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:07:08.978258   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:07:08.978282   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:07:08.978303   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:07:08.978949   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:07:09.004455   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:07:09.029604   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:07:09.053313   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:07:09.077554   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:07:09.102196   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:07:09.127069   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:07:09.153769   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:07:09.180539   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:07:09.206167   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:07:09.224663   21098 ssh_runner.go:195] Run: openssl version
	I0831 22:07:09.230496   21098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:07:09.241375   21098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.246377   21098 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.246454   21098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.252587   21098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:07:09.263592   21098 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:07:09.267795   21098 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:07:09.267846   21098 kubeadm.go:392] StartCluster: {Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:07:09.267917   21098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:07:09.267965   21098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:07:09.309105   21098 cri.go:89] found id: ""
	I0831 22:07:09.309176   21098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:07:09.319285   21098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:07:09.333293   21098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:07:09.348394   21098 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:07:09.348414   21098 kubeadm.go:157] found existing configuration files:
	
	I0831 22:07:09.348466   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:07:09.358972   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:07:09.359049   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:07:09.370609   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:07:09.382278   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:07:09.382347   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:07:09.393363   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:07:09.403425   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:07:09.403501   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:07:09.414483   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:07:09.425120   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:07:09.425188   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:07:09.436044   21098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 22:07:09.489573   21098 kubeadm.go:310] W0831 22:07:09.473217     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:07:09.490547   21098 kubeadm.go:310] W0831 22:07:09.474222     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:07:09.600273   21098 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:07:19.334217   21098 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:07:19.334291   21098 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:07:19.334389   21098 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:07:19.334542   21098 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:07:19.334652   21098 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:07:19.334708   21098 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:07:19.336431   21098 out.go:235]   - Generating certificates and keys ...
	I0831 22:07:19.336518   21098 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:07:19.336608   21098 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:07:19.336691   21098 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:07:19.336759   21098 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:07:19.336849   21098 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:07:19.336925   21098 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:07:19.337003   21098 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:07:19.337137   21098 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-132210 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0831 22:07:19.337224   21098 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:07:19.337376   21098 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-132210 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0831 22:07:19.337459   21098 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:07:19.337525   21098 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:07:19.337585   21098 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:07:19.337668   21098 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:07:19.337742   21098 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:07:19.337831   21098 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:07:19.337921   21098 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:07:19.338006   21098 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:07:19.338077   21098 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:07:19.338185   21098 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:07:19.338278   21098 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:07:19.340682   21098 out.go:235]   - Booting up control plane ...
	I0831 22:07:19.340798   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:07:19.340931   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:07:19.341031   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:07:19.341176   21098 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:07:19.341298   21098 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:07:19.341358   21098 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:07:19.341525   21098 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:07:19.341674   21098 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:07:19.341768   21098 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001861689s
	I0831 22:07:19.341842   21098 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:07:19.341928   21098 kubeadm.go:310] [api-check] The API server is healthy after 5.002243064s
	I0831 22:07:19.342094   21098 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:07:19.342281   21098 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:07:19.342371   21098 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:07:19.342560   21098 kubeadm.go:310] [mark-control-plane] Marking the node addons-132210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:07:19.342651   21098 kubeadm.go:310] [bootstrap-token] Using token: tds7o0.8p21t51ubuabfjmq
	I0831 22:07:19.344005   21098 out.go:235]   - Configuring RBAC rules ...
	I0831 22:07:19.344099   21098 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:07:19.344192   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:07:19.344360   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:07:19.344510   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:07:19.344781   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:07:19.344861   21098 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:07:19.344973   21098 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:07:19.345017   21098 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:07:19.345057   21098 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:07:19.345063   21098 kubeadm.go:310] 
	I0831 22:07:19.345111   21098 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:07:19.345117   21098 kubeadm.go:310] 
	I0831 22:07:19.345211   21098 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:07:19.345219   21098 kubeadm.go:310] 
	I0831 22:07:19.345240   21098 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:07:19.345289   21098 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:07:19.345334   21098 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:07:19.345340   21098 kubeadm.go:310] 
	I0831 22:07:19.345393   21098 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:07:19.345401   21098 kubeadm.go:310] 
	I0831 22:07:19.345443   21098 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:07:19.345452   21098 kubeadm.go:310] 
	I0831 22:07:19.345503   21098 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:07:19.345607   21098 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:07:19.345685   21098 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:07:19.345695   21098 kubeadm.go:310] 
	I0831 22:07:19.345816   21098 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:07:19.345897   21098 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:07:19.345903   21098 kubeadm.go:310] 
	I0831 22:07:19.345969   21098 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tds7o0.8p21t51ubuabfjmq \
	I0831 22:07:19.346062   21098 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e \
	I0831 22:07:19.346084   21098 kubeadm.go:310] 	--control-plane 
	I0831 22:07:19.346090   21098 kubeadm.go:310] 
	I0831 22:07:19.346184   21098 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:07:19.346195   21098 kubeadm.go:310] 
	I0831 22:07:19.346266   21098 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tds7o0.8p21t51ubuabfjmq \
	I0831 22:07:19.346370   21098 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e 
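For reference, the --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the standard openssl pipeline from the kubeadm documentation (generic sketch; the /var/lib/minikube/certs directory is the certificateDir shown earlier in this log, in place of kubeadm's default /etc/kubernetes/pki):

	# Recompute the discovery-token CA cert hash from the cluster CA (assumes an RSA CA key, kubeadm's default).
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'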
	I0831 22:07:19.346389   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:07:19.346398   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:07:19.347902   21098 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 22:07:19.348984   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 22:07:19.359846   21098 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
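The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not reproduced in the log. A minimal sketch of a typical bridge-plus-portmap conflist of this shape (illustrative values only, not the file this run generated):

	# Rough illustration of a bridge CNI conflist -- field values are assumptions,
	# not the exact 496-byte file minikube wrote in this run.
	cat <<'EOF' > 1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF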
	I0831 22:07:19.378926   21098 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:07:19.378983   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:19.379028   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-132210 minikube.k8s.io/updated_at=2024_08_31T22_07_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-132210 minikube.k8s.io/primary=true
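The minikube.k8s.io/* node labels applied above can be confirmed afterwards with plain kubectl queries (generic examples; only the profile name addons-132210 comes from this run):

	# Show all labels on the node, including minikube.k8s.io/version and minikube.k8s.io/primary.
	kubectl get node addons-132210 --show-labels
	# Or select the node by one of the labels just applied.
	kubectl get nodes -l minikube.k8s.io/primary=true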
	I0831 22:07:19.505912   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:19.528337   21098 ops.go:34] apiserver oom_adj: -16
	I0831 22:07:20.006130   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:20.506049   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:21.006229   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:21.506568   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:22.006961   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:22.506496   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.006336   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.506858   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.585460   21098 kubeadm.go:1113] duration metric: took 4.206527831s to wait for elevateKubeSystemPrivileges
	I0831 22:07:23.585486   21098 kubeadm.go:394] duration metric: took 14.317645494s to StartCluster
	I0831 22:07:23.585502   21098 settings.go:142] acquiring lock: {Name:mkec6b4f5d3301688503002977bc4d63aab7adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:23.585612   21098 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:07:23.585914   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/kubeconfig: {Name:mkc6d6b60cc62b336d228fe4b49e098aa4d94f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:23.586102   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:07:23.586108   21098 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:07:23.586191   21098 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:07:23.586284   21098 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-132210"
	I0831 22:07:23.586294   21098 addons.go:69] Setting default-storageclass=true in profile "addons-132210"
	I0831 22:07:23.586299   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:23.586295   21098 addons.go:69] Setting cloud-spanner=true in profile "addons-132210"
	I0831 22:07:23.586317   21098 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-132210"
	I0831 22:07:23.586338   21098 addons.go:234] Setting addon cloud-spanner=true in "addons-132210"
	I0831 22:07:23.586334   21098 addons.go:69] Setting metrics-server=true in profile "addons-132210"
	I0831 22:07:23.586358   21098 addons.go:69] Setting inspektor-gadget=true in profile "addons-132210"
	I0831 22:07:23.586370   21098 addons.go:69] Setting helm-tiller=true in profile "addons-132210"
	I0831 22:07:23.586379   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586382   21098 addons.go:234] Setting addon inspektor-gadget=true in "addons-132210"
	I0831 22:07:23.586383   21098 addons.go:69] Setting storage-provisioner=true in profile "addons-132210"
	I0831 22:07:23.586392   21098 addons.go:234] Setting addon helm-tiller=true in "addons-132210"
	I0831 22:07:23.586403   21098 addons.go:234] Setting addon storage-provisioner=true in "addons-132210"
	I0831 22:07:23.586413   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586423   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586433   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586686   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586728   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586770   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586804   21098 addons.go:69] Setting registry=true in profile "addons-132210"
	I0831 22:07:23.586813   21098 addons.go:69] Setting volumesnapshots=true in profile "addons-132210"
	I0831 22:07:23.586825   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586832   21098 addons.go:234] Setting addon registry=true in "addons-132210"
	I0831 22:07:23.586844   21098 addons.go:234] Setting addon volumesnapshots=true in "addons-132210"
	I0831 22:07:23.586855   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586283   21098 addons.go:69] Setting yakd=true in profile "addons-132210"
	I0831 22:07:23.586867   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586889   21098 addons.go:234] Setting addon yakd=true in "addons-132210"
	I0831 22:07:23.586916   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586807   21098 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-132210"
	I0831 22:07:23.586988   21098 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-132210"
	I0831 22:07:23.587205   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587226   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587228   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586371   21098 addons.go:234] Setting addon metrics-server=true in "addons-132210"
	I0831 22:07:23.587269   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586360   21098 addons.go:69] Setting gcp-auth=true in profile "addons-132210"
	I0831 22:07:23.587294   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587296   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.587300   21098 mustload.go:65] Loading cluster: addons-132210
	I0831 22:07:23.587308   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587341   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587377   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586345   21098 addons.go:69] Setting ingress-dns=true in profile "addons-132210"
	I0831 22:07:23.587497   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:23.586770   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587534   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587643   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587679   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587497   21098 addons.go:234] Setting addon ingress-dns=true in "addons-132210"
	I0831 22:07:23.586789   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587724   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.587760   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586794   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586854   21098 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-132210"
	I0831 22:07:23.586783   21098 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-132210"
	I0831 22:07:23.587810   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587828   21098 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-132210"
	I0831 22:07:23.586798   21098 addons.go:69] Setting volcano=true in profile "addons-132210"
	I0831 22:07:23.587854   21098 addons.go:234] Setting addon volcano=true in "addons-132210"
	I0831 22:07:23.586331   21098 addons.go:69] Setting ingress=true in profile "addons-132210"
	I0831 22:07:23.587887   21098 addons.go:234] Setting addon ingress=true in "addons-132210"
	I0831 22:07:23.588117   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.588477   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.588503   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.588555   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.588574   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.588797   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.588819   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.589146   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.589162   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.589185   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.589230   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.589278   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.595405   21098 out.go:177] * Verifying Kubernetes components...
	I0831 22:07:23.599775   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:23.607898   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0831 22:07:23.608464   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I0831 22:07:23.608573   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0831 22:07:23.609061   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609163   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609490   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609665   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.609681   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.609938   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.609953   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.610031   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610054   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.610072   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.610147   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0831 22:07:23.610474   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610549   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0831 22:07:23.610740   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.610794   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610831   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.611018   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.611156   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.611170   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.611286   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.611299   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.611477   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.611618   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.611699   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.615775   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.615947   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.615974   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.616335   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.616370   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.621980   21098 addons.go:234] Setting addon default-storageclass=true in "addons-132210"
	I0831 22:07:23.622070   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.622457   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.622516   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.623860   21098 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-132210"
	I0831 22:07:23.623897   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.624221   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.624251   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.631854   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.615777   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.632193   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.615777   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.632797   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.632822   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0831 22:07:23.639452   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I0831 22:07:23.639483   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0831 22:07:23.640021   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.640140   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.640612   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.640631   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.640965   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.641062   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.641077   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.641147   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.641480   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.642095   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.642132   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.644079   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0831 22:07:23.644378   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.644778   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.644853   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.644867   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.644876   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.645175   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.645259   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.645287   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.645335   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0831 22:07:23.645668   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.645683   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.645700   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.645993   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.646012   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.646152   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.646163   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.647040   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.647260   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.647648   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.647673   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.648054   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.649653   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.651862   21098 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:07:23.653359   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0831 22:07:23.653404   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:07:23.653419   21098 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:07:23.653443   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.653793   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.656591   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.657110   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.657148   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.657255   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.657289   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.657300   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.657746   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.657824   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.657895   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0831 22:07:23.657957   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.658358   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.658386   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.658390   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.658533   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.659277   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.659302   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.659683   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.659864   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.661487   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.663195   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:07:23.663288   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0831 22:07:23.663682   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.664270   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.664292   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.664416   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:07:23.664440   21098 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:07:23.664462   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.664598   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.665099   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.665137   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.668127   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.668154   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.668185   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.668378   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.668565   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.668732   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.668882   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.669430   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0831 22:07:23.669703   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.670101   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.670117   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.670405   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.671393   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.671430   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.672401   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0831 22:07:23.672405   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0831 22:07:23.672825   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.672904   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.673447   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.673475   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.673794   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.674020   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.674041   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.674092   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.674985   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.675528   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.675566   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.676624   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.678884   21098 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:07:23.680300   21098 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:23.680318   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:07:23.680341   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.681210   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0831 22:07:23.683715   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.683816   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0831 22:07:23.684416   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.684430   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.684488   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.684506   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.684593   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.684729   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.684885   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.684908   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.685078   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.685876   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.686073   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.686679   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0831 22:07:23.687155   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.687443   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.687626   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.687903   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.687917   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.688614   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.688628   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.688964   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.689489   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.689521   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.689640   21098 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:07:23.690115   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.690674   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.690712   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.690910   21098 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:23.690929   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:07:23.690949   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.693797   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.694203   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.694226   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.694378   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.694536   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.694652   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.694748   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.695907   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0831 22:07:23.696312   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.696776   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.696797   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.697094   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.697267   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.704189   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.704458   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0831 22:07:23.704894   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.705446   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.705465   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.705571   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0831 22:07:23.705976   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.706019   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:23.706276   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.706426   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.706438   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.706789   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.707335   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.707376   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.707662   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.708390   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0831 22:07:23.708421   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:23.708877   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.709389   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.709405   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.709467   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.709506   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44487
	I0831 22:07:23.709999   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.710056   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0831 22:07:23.710157   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0831 22:07:23.710455   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.710596   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.710831   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.710851   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.710876   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.710886   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:07:23.710934   21098 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:07:23.711123   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.711251   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.711467   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.711486   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.711519   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.712107   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.712202   21098 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:23.712222   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:07:23.712241   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.712501   21098 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:23.712517   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:07:23.712531   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.712669   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.712683   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.712710   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.712727   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37725
	I0831 22:07:23.712748   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.713405   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.713788   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.713855   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.714889   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.714908   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.715016   21098 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:07:23.715152   21098 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0831 22:07:23.715575   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.715816   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.716851   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.717255   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0831 22:07:23.717351   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.717594   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0831 22:07:23.717606   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0831 22:07:23.717622   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.718309   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.718412   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0831 22:07:23.718545   21098 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:07:23.718731   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.719156   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.719170   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.719236   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719258   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719522   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.719872   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.719904   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719936   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.719954   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.720069   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.720084   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.720095   21098 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:07:23.720107   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:07:23.720130   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.720444   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.720568   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.720598   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.720724   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.720879   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.720934   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.720979   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.721048   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.721785   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.721873   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.722229   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.723401   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.723420   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.723449   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.723458   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0831 22:07:23.723466   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.723623   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.723671   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.723988   21098 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:23.723999   21098 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:07:23.724001   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.724033   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.724011   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.724695   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.724718   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.724889   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.725405   21098 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:07:23.725476   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.725493   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.725933   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.726224   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.726494   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:07:23.726505   21098 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:07:23.726517   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.727867   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.728730   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.728793   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.729260   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.729288   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730267   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:07:23.730375   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730404   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.730417   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.730471   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.730484   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730629   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.730630   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.730777   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.730843   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.730978   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.731217   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.731708   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.731727   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.732701   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:07:23.733806   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0831 22:07:23.733914   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.734151   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.734236   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.734369   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.734573   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.734941   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.734955   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.735218   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:07:23.735423   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.735605   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.737637   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:07:23.737864   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.739670   21098 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:07:23.739673   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:07:23.740906   21098 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:23.740926   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:07:23.740944   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.742803   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:07:23.743050   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0831 22:07:23.743591   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.744134   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.744153   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.744225   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.744513   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.744683   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.744705   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.744736   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.744900   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.745356   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:07:23.745430   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.745580   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.745776   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.746229   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.746407   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:23.746416   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:23.746590   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:23.746598   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:23.746604   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:23.746609   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:23.746831   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:23.746844   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	W0831 22:07:23.746916   21098 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0831 22:07:23.748245   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:07:23.749403   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:07:23.749426   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:07:23.749442   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.751103   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0831 22:07:23.751505   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.751960   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.751972   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.752271   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.752468   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.752488   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.752879   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.752892   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.753179   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.753384   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.753544   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.753666   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.753967   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	W0831 22:07:23.754404   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53982->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.754423   21098 retry.go:31] will retry after 201.037828ms: ssh: handshake failed: read tcp 192.168.39.1:53982->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.755597   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0831 22:07:23.755767   21098 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:07:23.755970   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.756401   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.756422   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.756792   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.756966   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.757169   21098 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:07:23.757183   21098 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:07:23.757195   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.758339   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.759819   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.760016   21098 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:07:23.760235   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.760273   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.760417   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.760619   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.760786   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.760948   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	W0831 22:07:23.761568   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53984->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.761587   21098 retry.go:31] will retry after 339.775685ms: ssh: handshake failed: read tcp 192.168.39.1:53984->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.762678   21098 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:07:23.764273   21098 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:23.764290   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:07:23.764302   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.767265   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.767714   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.767737   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.768009   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.768256   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	W0831 22:07:23.768259   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53988->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.768311   21098 retry.go:31] will retry after 253.843102ms: ssh: handshake failed: read tcp 192.168.39.1:53988->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.768409   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.768516   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	W0831 22:07:23.769143   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53996->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.769159   21098 retry.go:31] will retry after 228.687708ms: ssh: handshake failed: read tcp 192.168.39.1:53996->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:24.009671   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0831 22:07:24.009698   21098 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0831 22:07:24.035122   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:07:24.035143   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:07:24.096675   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:24.137383   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:07:24.137405   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:07:24.192363   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:24.208220   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:07:24.208244   21098 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:07:24.213758   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:24.294093   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:24.337682   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:24.337708   21098 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0831 22:07:24.355787   21098 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:07:24.355811   21098 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:07:24.397120   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:07:24.397152   21098 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:07:24.399259   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:07:24.399283   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:07:24.402180   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:24.414440   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:07:24.414467   21098 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:07:24.448723   21098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:24.448889   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:07:24.517279   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:24.544228   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:07:24.544262   21098 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:07:24.582484   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:07:24.582507   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:07:24.590888   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:24.616331   21098 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:24.616362   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:07:24.621087   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:07:24.621125   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:07:24.734564   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:24.734588   21098 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:07:24.758600   21098 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:07:24.758627   21098 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:07:24.761196   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:24.842914   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:24.842933   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:07:24.864484   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:07:24.864510   21098 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:07:24.881251   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:07:24.881275   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:07:24.905038   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:24.972031   21098 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:07:24.972050   21098 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:07:25.015374   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:25.038602   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:25.055589   21098 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:25.055612   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:07:25.151602   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:07:25.151634   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:07:25.172190   21098 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:07:25.172212   21098 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:07:25.405884   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:25.444500   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:07:25.444532   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:07:25.463903   21098 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:07:25.463928   21098 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:07:25.694161   21098 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:07:25.694186   21098 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:07:25.820674   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:07:25.820702   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:07:26.073362   21098 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:07:26.073394   21098 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:07:26.236676   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:07:26.236699   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:07:26.439580   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:07:26.439601   21098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:07:26.439960   21098 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:26.439985   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:07:26.584141   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:07:26.584183   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:07:26.783005   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:26.907600   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:07:26.907633   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:07:27.113741   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.01702554s)
	I0831 22:07:27.113757   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.92136815s)
	I0831 22:07:27.113790   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.113800   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.113830   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.113849   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114071   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114123   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114136   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.114145   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114194   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114229   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114252   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114268   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.114277   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114475   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114488   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114509   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114523   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114580   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114592   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.185606   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.185631   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.185967   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.185985   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.328527   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:27.328551   21098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:07:27.420622   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:28.677844   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.383721569s)
	I0831 22:07:28.677898   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.677918   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678012   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464218982s)
	I0831 22:07:28.678051   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678062   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678125   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678139   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678148   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678155   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678124   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678363   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678382   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678392   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678399   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678411   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678423   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678427   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678445   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678604   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678634   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678641   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:30.778509   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:07:30.778553   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:30.781708   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:30.782089   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:30.782125   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:30.782277   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:30.782513   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:30.782693   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:30.782862   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:31.160940   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:07:31.262365   21098 addons.go:234] Setting addon gcp-auth=true in "addons-132210"
	I0831 22:07:31.262423   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:31.262727   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:31.262758   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:31.277512   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0831 22:07:31.277939   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:31.278419   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:31.278439   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:31.278698   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:31.279297   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:31.279351   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:31.294328   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0831 22:07:31.294767   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:31.295196   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:31.295217   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:31.295567   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:31.295765   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:31.297275   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:31.297521   21098 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:07:31.297544   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:31.300179   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:31.300578   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:31.300608   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:31.300739   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:31.300921   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:31.301090   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:31.301236   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:32.605488   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.203266923s)
	I0831 22:07:32.605553   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.605587   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.605634   21098 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.156868741s)
	I0831 22:07:32.605738   21098 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.156819626s)
	I0831 22:07:32.605762   21098 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0831 22:07:32.606876   21098 node_ready.go:35] waiting up to 6m0s for node "addons-132210" to be "Ready" ...
	I0831 22:07:32.607056   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.089745734s)
	I0831 22:07:32.607084   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607095   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607118   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.016199589s)
	I0831 22:07:32.607152   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607164   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607211   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.845992141s)
	I0831 22:07:32.607230   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607245   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607248   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.702177169s)
	I0831 22:07:32.607264   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607279   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607359   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.591933506s)
	I0831 22:07:32.607385   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607396   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607840   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.569207893s)
	I0831 22:07:32.607890   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607912   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607980   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.607989   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608007   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608017   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608040   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.202125622s)
	W0831 22:07:32.608084   21098 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:32.608103   21098 retry.go:31] will retry after 213.169609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:32.608139   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608154   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608156   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608180   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608181   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608196   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608201   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608205   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608217   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608221   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.825175604s)
	I0831 22:07:32.608272   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608287   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608294   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608321   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608328   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608225   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608446   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608456   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608704   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608733   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608743   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608759   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608768   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.609174   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.609191   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.609201   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.609210   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608880   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.610038   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.610082   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.610099   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.610108   21098 addons.go:475] Verifying addon ingress=true in "addons-132210"
	I0831 22:07:32.610320   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.610332   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.610339   21098 addons.go:475] Verifying addon registry=true in "addons-132210"
	I0831 22:07:32.611022   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611037   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611103   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611228   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611256   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611264   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611281   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.611290   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.611294   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611320   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611347   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611356   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.611364   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.611744   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611769   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611785   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611796   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611796   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611805   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.612775   21098 out.go:177] * Verifying ingress addon...
	I0831 22:07:32.612947   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.612972   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.613355   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.613371   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.613380   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.612991   21098 out.go:177] * Verifying registry addon...
	I0831 22:07:32.613676   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.613692   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.613702   21098 addons.go:475] Verifying addon metrics-server=true in "addons-132210"
	I0831 22:07:32.613754   21098 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-132210 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:07:32.615291   21098 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:07:32.616400   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:07:32.633226   21098 node_ready.go:49] node "addons-132210" has status "Ready":"True"
	I0831 22:07:32.633254   21098 node_ready.go:38] duration metric: took 26.354748ms for node "addons-132210" to be "Ready" ...
	I0831 22:07:32.633267   21098 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:32.672510   21098 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:07:32.672535   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.672811   21098 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:07:32.672833   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.716505   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.716533   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.716849   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.716869   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.722171   21098 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.790958   21098 pod_ready.go:93] pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.790982   21098 pod_ready.go:82] duration metric: took 68.780152ms for pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.790998   21098 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.822430   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:32.843686   21098 pod_ready.go:93] pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.843710   21098 pod_ready.go:82] duration metric: took 52.705196ms for pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.843719   21098 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.894732   21098 pod_ready.go:93] pod "etcd-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.894755   21098 pod_ready.go:82] duration metric: took 51.029517ms for pod "etcd-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.894765   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.909271   21098 pod_ready.go:93] pod "kube-apiserver-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.909293   21098 pod_ready.go:82] duration metric: took 14.521596ms for pod "kube-apiserver-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.909302   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.013537   21098 pod_ready.go:93] pod "kube-controller-manager-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.013559   21098 pod_ready.go:82] duration metric: took 104.249609ms for pod "kube-controller-manager-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.013571   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pf4zb" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.127456   21098 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-132210" context rescaled to 1 replicas
	I0831 22:07:33.148736   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.257499   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.418853   21098 pod_ready.go:93] pod "kube-proxy-pf4zb" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.418877   21098 pod_ready.go:82] duration metric: took 405.299679ms for pod "kube-proxy-pf4zb" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.418890   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.854578   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.855771   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.865760   21098 pod_ready.go:93] pod "kube-scheduler-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.865782   21098 pod_ready.go:82] duration metric: took 446.884331ms for pod "kube-scheduler-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.865796   21098 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:34.148775   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.148849   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.303845   21098 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.006297628s)
	I0831 22:07:34.303848   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.883150423s)
	I0831 22:07:34.304054   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.304074   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.304425   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.304447   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.304456   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.304467   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.304698   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.304719   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.304743   21098 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-132210"
	I0831 22:07:34.304787   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.305581   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:34.306666   21098 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:07:34.308329   21098 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:07:34.309280   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:07:34.309726   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:34.309747   21098 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:07:34.329848   21098 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:07:34.329875   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.454442   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:34.454475   21098 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:07:34.518709   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:34.518732   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:07:34.575530   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:34.579667   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.757184457s)
	I0831 22:07:34.579722   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.579737   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.580030   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.580053   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.580073   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.580089   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.580102   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.580283   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.580308   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.580311   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.619308   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.620410   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.814548   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.120705   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.121027   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.313455   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.628958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.629640   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.874670   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.924472   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:35.964663   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.389094024s)
	I0831 22:07:35.964728   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:35.964747   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:35.965086   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:35.965129   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:35.965146   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:35.965161   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:35.965177   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:35.965478   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:35.965495   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:35.965500   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:35.967806   21098 addons.go:475] Verifying addon gcp-auth=true in "addons-132210"
	I0831 22:07:35.969545   21098 out.go:177] * Verifying gcp-auth addon...
	I0831 22:07:35.971896   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:07:35.999763   21098 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:35.999784   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:36.122605   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.123410   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.315123   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.475878   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:36.619752   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.620766   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.814203   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.975190   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:37.122336   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.122478   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.315341   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.475177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:37.620866   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.621439   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.814228   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.975613   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:38.120903   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.121229   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.314007   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.372392   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:38.475094   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:38.944270   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.944466   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.944638   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.977495   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:39.125969   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.126728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.313948   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.477476   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:39.620217   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.620445   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.814405   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.974903   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:40.121141   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.121755   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.314729   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.475251   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:40.620786   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.621250   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.814002   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.872198   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:41.005315   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:41.121910   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.122193   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.315886   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.476677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:41.621217   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.621565   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.823677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.977326   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:42.120209   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.120445   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.319015   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.476300   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:42.620896   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.621628   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.813805   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.872520   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:42.975650   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:43.119591   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.120374   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.316617   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.476126   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:43.619662   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.620425   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.815672   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.977099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:44.120689   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.120721   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.313640   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.474938   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:44.619883   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.620952   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.816734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.975512   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:45.119105   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.119826   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.313584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.380588   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:45.475926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:45.619771   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.620772   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.813745   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.975148   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.120296   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.120403   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.314008   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.475502   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.619407   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.619757   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.813669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.976377   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.121378   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.121861   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.320782   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.475797   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.620484   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.621120   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.817902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.873131   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:47.979915   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.120586   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.121010   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.314359   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.475174   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.620253   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.620967   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.813635   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.975699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.119734   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.120086   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.313782   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.475879   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.619985   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.621004   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.815468   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.873566   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:49.975581   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.120337   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.120541   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.314227   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.478135   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.622036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.622859   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.814060   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.975967   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.120306   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.121507   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.314547   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.475724   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.620114   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.620309   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.814022   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.976109   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.121801   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:52.122553   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.314307   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.372533   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:52.476431   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.619444   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.620536   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:52.814597   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.975521   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.120042   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.120210   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:53.314115   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.475728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.620177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:53.623813   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.814919   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.975959   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.120801   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.121168   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:54.315417   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.374460   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:54.476113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.619806   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.621022   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:54.815198   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.975080   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.120293   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.121322   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:55.314732   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.475687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.619856   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.620809   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:55.814765   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.975740   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.120854   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:56.121921   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.316560   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.475631   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.619589   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.620330   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:56.814597   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.872821   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:56.975866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.120787   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.120963   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:57.314895   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.476283   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.618831   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.620240   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:57.813768   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.975551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.121198   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:58.121479   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.314126   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.475209   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.620354   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.623406   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:58.817231   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.975135   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.120742   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.121902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:59.314224   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.372594   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:59.654374   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:59.654873   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.655101   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.814892   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.976412   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.121236   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:00.121952   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.314857   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.476585   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.620958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:00.621503   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.814717   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.975596   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.120556   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:01.121227   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.314332   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.373553   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:01.475855   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.620256   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:01.620695   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.817902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.976941   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.120512   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:02.120709   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.315631   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.475468   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.621509   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:02.621785   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.814806   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.976174   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.120440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:03.120863   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.313700   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.475835   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.619665   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.621704   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:03.814121   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.872588   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:03.975298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.120824   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:04.121184   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.314338   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.475429   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.620540   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.620584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:04.815162   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.976895   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.120594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:05.120730   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.315865   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.476472   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.619151   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.619193   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:05.814469   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.873045   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:05.976083   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.120276   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.121632   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:06.316445   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.476113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.619879   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.621235   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:06.817665   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.977266   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.121891   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:07.125370   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.314681   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.475319   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.622891   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:07.623130   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.815134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.975338   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.120092   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.121833   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:08.314857   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.372618   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:08.475633   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.620926   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.622347   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.022099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.022480   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.120725   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.120911   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.314632   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.476068   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.620093   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.621293   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.814918   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.982257   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.120692   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.121929   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:10.314650   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.475440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.621191   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:10.621624   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.814610   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.871823   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:10.975582   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.120349   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:11.121548   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.314255   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.475551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.619270   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.619644   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:11.813295   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.976245   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.121122   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:12.121879   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.314903   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.475397   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.620793   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:12.621162   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.814057   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.872130   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:12.975754   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.133769   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:13.134318   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.314790   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.477695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.622634   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:13.624847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.821501   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.976538   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.119646   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.120341   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:14.315173   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.475306   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.621185   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.621510   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:14.814467   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.872822   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:14.976294   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.120441   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:15.121127   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.315400   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.475388   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.620578   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.620953   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:15.813943   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.979488   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.121495   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:16.121576   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.314944   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.475455   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.620506   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.620558   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:16.813569   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.872856   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:16.975991   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.120803   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:17.125876   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.314160   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.475916   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.620075   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:17.621270   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.815155   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.981149   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.120629   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.120785   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:18.315019   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.476099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.620556   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.620934   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:18.814347   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.977438   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.120685   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:19.121338   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.315435   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.371445   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:19.475248   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.620321   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:19.620767   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.814394   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.975242   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.120360   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:20.120513   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.315529   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.484317   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.620297   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.620551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:20.814555   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.976127   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.120746   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.120965   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:21.315551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.372806   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:21.476774   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.620656   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.621401   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:21.814726   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.975838   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:22.122780   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.126273   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:22.314614   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.476790   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:22.619929   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.622675   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:22.814144   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.975643   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:23.119721   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.120559   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:23.315087   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.474923   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:23.619836   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.621736   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:23.813687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.871468   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:23.976699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:24.120045   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.123398   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:24.602840   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:24.603194   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.619810   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.621697   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:24.814715   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.975695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:25.120948   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:25.121392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.318299   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.476633   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:25.619392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.620445   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:25.814377   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.872649   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:25.976267   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:26.122178   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:26.122596   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:26.314825   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.474926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:26.620117   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:26.620392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:26.815236   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.976263   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:27.122244   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:27.126825   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:27.314503   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:27.475451   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:27.619077   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:27.620128   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:27.814505   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:27.976659   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:28.119847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:28.119956   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:28.315111   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:28.373901   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:28.477178   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:28.621847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:28.622419   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:28.814623   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:28.975971   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:29.120702   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:29.126856   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:29.333033   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:29.475641   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:29.620251   21098 kapi.go:107] duration metric: took 57.003845187s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:08:29.620894   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:29.813428   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:29.976100   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:30.120301   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:30.315054   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:30.475927   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:30.621321   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:30.816504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:30.873025   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:30.976290   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:31.120152   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:31.316147   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:31.476032   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:31.620260   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:31.816255   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:31.975740   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:32.122583   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:32.314298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:32.475815   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:32.620031   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:32.814337   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:32.873931   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:32.976076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:33.127234   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:33.313541   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:33.475361   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:33.619918   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:33.814036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:33.975222   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:34.119967   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:34.314700   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:34.476130   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:34.619753   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:34.815637   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:34.975904   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:35.119845   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:35.314907   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:35.372290   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:35.475061   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:35.620392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:35.814214   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:35.975293   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:36.120499   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:36.315134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:36.476924   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:36.625728   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:36.815568   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:36.975977   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:37.119760   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:37.314098   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:37.475403   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:37.619353   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:37.814409   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:37.872370   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:38.414352   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:38.422314   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:38.422534   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:38.475478   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:38.620548   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:38.814646   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:38.978424   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:39.120310   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:39.315834   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:39.476326   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:39.619867   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:39.813168   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:39.875054   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:39.983870   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.119802   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:40.381691   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:40.480228   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.621421   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:40.815148   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:40.975440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.119699   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:41.314866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:41.475833   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.619956   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:41.813677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:41.975111   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.121321   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:42.314456   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:42.372543   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:42.475460   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.619163   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:42.814929   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:42.975788   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.120305   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:43.314076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:43.475628   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.620272   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:43.822113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:43.976312   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.119884   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:44.319618   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:44.381557   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:44.477017   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.621506   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:44.826669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:44.976036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.123433   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:45.313890   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:45.476804   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.619848   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:45.813116   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:45.976701   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.119113   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:46.313958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:46.477472   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.620824   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:46.952945   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:46.956360   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:46.975185   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.120135   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:47.325549   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:47.476182   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.618992   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:47.815679   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:47.976615   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.119381   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:48.317018   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:48.476286   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.620330   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:48.814281   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:48.976023   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.119819   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:49.314898   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:49.372370   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:49.475523   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.679647   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:49.815584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:49.975653   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.119243   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:50.314821   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:50.493960   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.620412   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:50.814454   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:50.878784   21098 pod_ready.go:93] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"True"
	I0831 22:08:50.878806   21098 pod_ready.go:82] duration metric: took 1m17.013002962s for pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.878816   21098 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.884470   21098 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace has status "Ready":"True"
	I0831 22:08:50.884489   21098 pod_ready.go:82] duration metric: took 5.665136ms for pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.884509   21098 pod_ready.go:39] duration metric: took 1m18.251226521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:08:50.884533   21098 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:08:50.884580   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:08:50.884638   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:08:50.955600   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:08:50.955626   21098 cri.go:89] found id: ""
	I0831 22:08:50.955635   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:08:50.955684   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:50.971435   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:08:50.971500   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:08:50.979153   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.029305   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:08:51.029329   21098 cri.go:89] found id: ""
	I0831 22:08:51.029338   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:08:51.029396   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.033768   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:08:51.033831   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:08:51.108642   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:08:51.108669   21098 cri.go:89] found id: ""
	I0831 22:08:51.108680   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:08:51.108740   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.114938   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:08:51.115012   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:08:51.121354   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:51.227554   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:08:51.227577   21098 cri.go:89] found id: ""
	I0831 22:08:51.227585   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:08:51.227629   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.242323   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:08:51.242407   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:08:51.306299   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:08:51.306319   21098 cri.go:89] found id: ""
	I0831 22:08:51.306327   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:08:51.306389   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.316849   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:51.317332   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:08:51.317392   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:08:51.404448   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:08:51.404466   21098 cri.go:89] found id: ""
	I0831 22:08:51.404472   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:08:51.404524   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.411682   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:08:51.411753   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:08:51.468597   21098 cri.go:89] found id: ""
	I0831 22:08:51.468623   21098 logs.go:276] 0 containers: []
	W0831 22:08:51.468631   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:08:51.468639   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:08:51.468651   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 22:08:51.482196   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0831 22:08:51.533263   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.533431   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:51.533563   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.533721   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:51.545028   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.545188   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:08:51.564495   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:08:51.564525   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:08:51.624037   21098 kapi.go:107] duration metric: took 1m19.008743885s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:08:51.815909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:51.850860   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:08:51.850908   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:08:51.976237   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.014670   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:08:52.014708   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:08:52.123496   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:08:52.123543   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:08:52.174958   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:08:52.175006   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:08:52.267648   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:08:52.267686   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:08:52.313784   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:52.334510   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:08:52.334536   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:08:52.388833   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:08:52.388872   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:08:52.458242   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:08:52.458270   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:08:52.475384   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.552472   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:08:52.552502   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:08:52.850283   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:52.937891   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:08:52.937926   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:08:52.937989   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:08:52.938003   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:52.938015   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:52.938039   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:52.938050   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:08:52.938058   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:08:52.938065   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:08:52.938073   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:08:52.978298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.315067   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:53.475986   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.817131   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.151054   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.314831   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.476234   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.816394   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.975421   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.315703   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:55.482514   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.815728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:55.974892   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.314245   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:56.475975   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.814011   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:56.976504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.313628   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:57.475060   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.814335   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:57.976408   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.314175   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:58.475969   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.815045   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:58.975678   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.314157   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:59.475913   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.814537   21098 kapi.go:107] duration metric: took 1m25.505259155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:08:59.976603   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.476062   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.976224   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.477863   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.975298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.476482   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.939628   21098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:09:02.961175   21098 api_server.go:72] duration metric: took 1m39.375038741s to wait for apiserver process to appear ...
	I0831 22:09:02.961200   21098 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:09:02.961237   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:09:02.961303   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:09:02.975877   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.999945   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:02.999964   21098 cri.go:89] found id: ""
	I0831 22:09:02.999971   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:09:03.000020   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.005045   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:09:03.005117   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:09:03.053454   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:03.053480   21098 cri.go:89] found id: ""
	I0831 22:09:03.053492   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:09:03.053548   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.057843   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:09:03.057918   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:09:03.102107   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:03.102134   21098 cri.go:89] found id: ""
	I0831 22:09:03.102144   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:09:03.102201   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.106758   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:09:03.106833   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:09:03.151303   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:03.151343   21098 cri.go:89] found id: ""
	I0831 22:09:03.151353   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:09:03.151431   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.155739   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:09:03.155817   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:09:03.212323   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:03.212348   21098 cri.go:89] found id: ""
	I0831 22:09:03.212357   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:09:03.212414   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.217064   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:09:03.217124   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:09:03.258208   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:03.258239   21098 cri.go:89] found id: ""
	I0831 22:09:03.258249   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:09:03.258311   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.262725   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:09:03.262794   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:09:03.304036   21098 cri.go:89] found id: ""
	I0831 22:09:03.304062   21098 logs.go:276] 0 containers: []
	W0831 22:09:03.304070   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:09:03.304077   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:09:03.304095   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:03.342633   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:09:03.342660   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:09:03.400297   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:09:03.400335   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:09:03.415806   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:09:03.415833   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:09:03.476498   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.538271   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:09:03.538303   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:03.602863   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:09:03.602897   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:03.663903   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:09:03.663936   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:03.737918   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:09:03.737948   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:03.788384   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:09:03.788419   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:09:03.838952   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.839121   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:03.839261   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.839450   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:03.850735   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.850895   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:03.871047   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:09:03.871072   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:03.931950   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:09:03.931983   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:09:03.975839   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.476679   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.492557   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:04.492594   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:09:04.492657   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:09:04.492672   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:04.492685   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:04.492696   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:04.492705   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:04.492716   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:04.492725   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:04.492737   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:09:04.975687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.475569   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.975871   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.476108   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.975461   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.476261   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.976037   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.475699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.975874   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.476000   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.975995   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.475521   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.195175   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.476002   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.975232   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.476158   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.975602   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.475134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.976504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.475926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.493799   21098 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0831 22:09:14.501337   21098 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0831 22:09:14.502516   21098 api_server.go:141] control plane version: v1.31.0
	I0831 22:09:14.502536   21098 api_server.go:131] duration metric: took 11.541329499s to wait for apiserver health ...
	I0831 22:09:14.502547   21098 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:09:14.502568   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:09:14.502621   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:09:14.542688   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:14.542712   21098 cri.go:89] found id: ""
	I0831 22:09:14.542721   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:09:14.542778   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.547207   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:09:14.547265   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:09:14.585253   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:14.585277   21098 cri.go:89] found id: ""
	I0831 22:09:14.585285   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:09:14.585348   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.589951   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:09:14.590001   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:09:14.634151   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:14.634171   21098 cri.go:89] found id: ""
	I0831 22:09:14.634178   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:09:14.634221   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.640116   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:09:14.640196   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:09:14.692606   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:14.692629   21098 cri.go:89] found id: ""
	I0831 22:09:14.692636   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:09:14.692684   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.699229   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:09:14.699294   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:09:14.736751   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:14.736777   21098 cri.go:89] found id: ""
	I0831 22:09:14.736785   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:09:14.736838   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.741521   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:09:14.741573   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:09:14.780419   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:14.780448   21098 cri.go:89] found id: ""
	I0831 22:09:14.780456   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:09:14.780501   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.785331   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:09:14.785397   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:09:14.832330   21098 cri.go:89] found id: ""
	I0831 22:09:14.832353   21098 logs.go:276] 0 containers: []
	W0831 22:09:14.832362   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:09:14.832371   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:09:14.832385   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:09:14.849233   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:09:14.849266   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:14.894187   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:09:14.894215   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:14.932967   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:09:14.933040   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:09:14.975669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.995013   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:09:14.995045   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:15.054114   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:09:15.054155   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:09:15.476598   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.938089   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:09:15.938136   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 22:09:15.975959   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0831 22:09:15.992400   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:15.992568   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:15.992739   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:15.992917   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.005184   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.005355   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:16.027347   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:09:16.027382   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:09:16.173595   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:09:16.173623   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:16.260126   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:09:16.260162   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:16.304110   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:09:16.304147   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:16.351377   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:16.351404   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:09:16.351460   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:09:16.351474   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.351483   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.351493   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.351510   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.351521   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:16.351531   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:16.351541   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:09:16.477457   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.975815   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.475770   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.979376   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.475592   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.976801   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.476121   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.977073   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.475240   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.976681   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.475484   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.976058   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.475479   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.975925   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.475911   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.976177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.475909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.975151   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.476109   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.975695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.362028   21098 system_pods.go:59] 18 kube-system pods found
	I0831 22:09:26.362061   21098 system_pods.go:61] "coredns-6f6b679f8f-fg5wn" [44101eb2-e5ab-4205-8770-fcd8e3e7c877] Running
	I0831 22:09:26.362066   21098 system_pods.go:61] "csi-hostpath-attacher-0" [d5e59cee-4aef-4a71-8e87-a17016deb8aa] Running
	I0831 22:09:26.362070   21098 system_pods.go:61] "csi-hostpath-resizer-0" [1472dd5a-623f-4e1b-bb88-aa9737965d61] Running
	I0831 22:09:26.362073   21098 system_pods.go:61] "csi-hostpathplugin-f9r7t" [c332f2e3-d867-4e1b-b27f-62b8ff234fb8] Running
	I0831 22:09:26.362077   21098 system_pods.go:61] "etcd-addons-132210" [78c4bd71-140b-49f9-8bc1-4b4e1f3e77e1] Running
	I0831 22:09:26.362080   21098 system_pods.go:61] "kube-apiserver-addons-132210" [266d225a-02ab-4449-bc78-88940e2e01be] Running
	I0831 22:09:26.362083   21098 system_pods.go:61] "kube-controller-manager-addons-132210" [efd3eb72-530e-4d83-9f80-ed4252c65edb] Running
	I0831 22:09:26.362086   21098 system_pods.go:61] "kube-ingress-dns-minikube" [0e0b7880-36a9-4588-b4f2-69ee4d28f341] Running
	I0831 22:09:26.362089   21098 system_pods.go:61] "kube-proxy-pf4zb" [d398a8b8-eef4-41b1-945b-bf73a594737e] Running
	I0831 22:09:26.362092   21098 system_pods.go:61] "kube-scheduler-addons-132210" [40d172ae-efff-4b60-b47f-86e58c381de7] Running
	I0831 22:09:26.362095   21098 system_pods.go:61] "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
	I0831 22:09:26.362099   21098 system_pods.go:61] "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
	I0831 22:09:26.362102   21098 system_pods.go:61] "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
	I0831 22:09:26.362105   21098 system_pods.go:61] "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
	I0831 22:09:26.362108   21098 system_pods.go:61] "snapshot-controller-56fcc65765-d8zmh" [842cfb93-bc24-4a0f-8191-8cff822e4981] Running
	I0831 22:09:26.362111   21098 system_pods.go:61] "snapshot-controller-56fcc65765-vz7w2" [879946b9-6f92-4ad5-8e18-84154122b30a] Running
	I0831 22:09:26.362115   21098 system_pods.go:61] "storage-provisioner" [7444df94-b591-414e-bb8f-6eecc8fb06c5] Running
	I0831 22:09:26.362119   21098 system_pods.go:61] "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
	I0831 22:09:26.362128   21098 system_pods.go:74] duration metric: took 11.859574121s to wait for pod list to return data ...
	I0831 22:09:26.362140   21098 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:09:26.364694   21098 default_sa.go:45] found service account: "default"
	I0831 22:09:26.364718   21098 default_sa.go:55] duration metric: took 2.572024ms for default service account to be created ...
	I0831 22:09:26.364726   21098 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:09:26.371946   21098 system_pods.go:86] 18 kube-system pods found
	I0831 22:09:26.371979   21098 system_pods.go:89] "coredns-6f6b679f8f-fg5wn" [44101eb2-e5ab-4205-8770-fcd8e3e7c877] Running
	I0831 22:09:26.371985   21098 system_pods.go:89] "csi-hostpath-attacher-0" [d5e59cee-4aef-4a71-8e87-a17016deb8aa] Running
	I0831 22:09:26.371989   21098 system_pods.go:89] "csi-hostpath-resizer-0" [1472dd5a-623f-4e1b-bb88-aa9737965d61] Running
	I0831 22:09:26.371993   21098 system_pods.go:89] "csi-hostpathplugin-f9r7t" [c332f2e3-d867-4e1b-b27f-62b8ff234fb8] Running
	I0831 22:09:26.371997   21098 system_pods.go:89] "etcd-addons-132210" [78c4bd71-140b-49f9-8bc1-4b4e1f3e77e1] Running
	I0831 22:09:26.372000   21098 system_pods.go:89] "kube-apiserver-addons-132210" [266d225a-02ab-4449-bc78-88940e2e01be] Running
	I0831 22:09:26.372003   21098 system_pods.go:89] "kube-controller-manager-addons-132210" [efd3eb72-530e-4d83-9f80-ed4252c65edb] Running
	I0831 22:09:26.372007   21098 system_pods.go:89] "kube-ingress-dns-minikube" [0e0b7880-36a9-4588-b4f2-69ee4d28f341] Running
	I0831 22:09:26.372011   21098 system_pods.go:89] "kube-proxy-pf4zb" [d398a8b8-eef4-41b1-945b-bf73a594737e] Running
	I0831 22:09:26.372014   21098 system_pods.go:89] "kube-scheduler-addons-132210" [40d172ae-efff-4b60-b47f-86e58c381de7] Running
	I0831 22:09:26.372017   21098 system_pods.go:89] "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
	I0831 22:09:26.372020   21098 system_pods.go:89] "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
	I0831 22:09:26.372023   21098 system_pods.go:89] "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
	I0831 22:09:26.372046   21098 system_pods.go:89] "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
	I0831 22:09:26.372053   21098 system_pods.go:89] "snapshot-controller-56fcc65765-d8zmh" [842cfb93-bc24-4a0f-8191-8cff822e4981] Running
	I0831 22:09:26.372057   21098 system_pods.go:89] "snapshot-controller-56fcc65765-vz7w2" [879946b9-6f92-4ad5-8e18-84154122b30a] Running
	I0831 22:09:26.372060   21098 system_pods.go:89] "storage-provisioner" [7444df94-b591-414e-bb8f-6eecc8fb06c5] Running
	I0831 22:09:26.372063   21098 system_pods.go:89] "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
	I0831 22:09:26.372068   21098 system_pods.go:126] duration metric: took 7.338208ms to wait for k8s-apps to be running ...
	I0831 22:09:26.372077   21098 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:09:26.372143   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:09:26.387943   21098 system_svc.go:56] duration metric: took 15.858116ms WaitForService to wait for kubelet
	I0831 22:09:26.387974   21098 kubeadm.go:582] duration metric: took 2m2.801840351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:09:26.387995   21098 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:09:26.390995   21098 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:09:26.391021   21098 node_conditions.go:123] node cpu capacity is 2
	I0831 22:09:26.391033   21098 node_conditions.go:105] duration metric: took 3.032634ms to run NodePressure ...
	I0831 22:09:26.391043   21098 start.go:241] waiting for startup goroutines ...
	I0831 22:09:26.475914   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.975777   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.476954   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.975206   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.476090   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.975734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.475698   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.976296   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.476559   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.975576   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.477596   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.975909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.475130   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.975291   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.476041   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.975866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.475356   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.976258   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.475594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.975538   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.475516   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.975882   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.475912   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.980397   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.476464   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.976629   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.476682   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.977594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.476050   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.975586   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.476076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.988997   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.475034   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.976591   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.476154   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.975736   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.476250   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.976670   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.476952   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.975160   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.475606   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.976118   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.476033   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.975996   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.475583   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.976184   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.475823   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.975703   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.476541   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.976407   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.476083   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.976078   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:52.475636   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:52.977028   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:53.475427   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:53.976231   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:54.475762   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:54.975423   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:55.480634   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:55.976191   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:56.475501   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:56.976688   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:57.477084   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:57.975727   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:58.476734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:58.975704   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:59.475793   21098 kapi.go:107] duration metric: took 2m23.503891799s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:09:59.477292   21098 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-132210 cluster.
	I0831 22:09:59.478644   21098 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:09:59.479814   21098 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:09:59.481180   21098 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, nvidia-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0831 22:09:59.482381   21098 addons.go:510] duration metric: took 2m35.8961992s for enable addons: enabled=[cloud-spanner default-storageclass nvidia-device-plugin storage-provisioner ingress-dns inspektor-gadget helm-tiller metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0831 22:09:59.482411   21098 start.go:246] waiting for cluster config update ...
	I0831 22:09:59.482427   21098 start.go:255] writing updated cluster config ...
	I0831 22:09:59.482654   21098 ssh_runner.go:195] Run: rm -f paused
	I0831 22:09:59.531140   21098 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:09:59.533137   21098 out.go:177] * Done! kubectl is now configured to use "addons-132210" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.342255737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142906342227521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e85233c1-9fcc-4c7b-a77b-f3dc6aa2446b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.342772168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8822193c-bb1b-43e9-82f9-32a2a95df088 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.342827821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8822193c-bb1b-43e9-82f9-32a2a95df088 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.343285125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa
088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a
9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f4
0292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d
,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-schedule
r,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8822193c-bb1b-43e9-82f9-32a2a95df088 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.382159698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9827e972-d0f4-489e-9ff5-ceba8d54dcb4 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.382232376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9827e972-d0f4-489e-9ff5-ceba8d54dcb4 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.383694077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf5ef82e-2394-4a88-a9da-a5fee8fb75ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.385364448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142906385336834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf5ef82e-2394-4a88-a9da-a5fee8fb75ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.386228499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7148dc3f-3b82-45df-8146-796885642e59 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.386283660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7148dc3f-3b82-45df-8146-796885642e59 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.386602376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa
088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a
9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f4
0292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d
,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-schedule
r,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7148dc3f-3b82-45df-8146-796885642e59 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.425328731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bba690a-03cf-4f62-b003-e24f23e68560 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.425416589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bba690a-03cf-4f62-b003-e24f23e68560 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.426552350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30166515-6a16-4774-9fc5-278d3eebf899 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.427704177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142906427678193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30166515-6a16-4774-9fc5-278d3eebf899 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.428627935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77f5bcc3-0928-4bb9-9b83-1f1d240920c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.428814036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77f5bcc3-0928-4bb9-9b83-1f1d240920c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.429632817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa
088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a
9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f4
0292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d
,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-schedule
r,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77f5bcc3-0928-4bb9-9b83-1f1d240920c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.471499722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9a13360-f0c2-43bb-9b69-aeddb0009f10 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.471586202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9a13360-f0c2-43bb-9b69-aeddb0009f10 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.472548692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a69beaa-46bc-410f-8c93-7320a7f24395 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.473731265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142906473700936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a69beaa-46bc-410f-8c93-7320a7f24395 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.474365798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca9c3029-2469-4e04-b6a2-8fe8f5caa934 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.474421492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca9c3029-2469-4e04-b6a2-8fe8f5caa934 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:21:46 addons-132210 crio[663]: time="2024-08-31 22:21:46.474757604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d,PodSandboxId:1112f04477239476ea91fec81c7f9ba331f6888492941361381dcc822fc0c767,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1725142108404872769,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wr2c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b310420-abf8-48e1-8b44-b000e6d4e2de,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18,PodSandboxId:7f4d1f645053746ac9abd9874df3926c878a72503fbde5c511cc06b05006c8b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725142093694602877,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lffjf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e70949d6-004f-45a1-95b4-cda03aefe9de,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa
088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a
9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f4
0292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d
,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-schedule
r,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca9c3029-2469-4e04-b6a2-8fe8f5caa934 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7458fbd94e22f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   efbe2df8b713c       hello-world-app-55bf9c44b4-bh4sk
	23f11da1cb74f       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   88099b0ca1ae0       nginx
	e8dcfee929f65       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   dc2ee3e74ad94       headlamp-57fb76fcdb-zb4l7
	a5e788d23e628       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   a65bfb6d507f4       gcp-auth-89d5ffd79-6n2z6
	03905d71943c4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   13 minutes ago      Exited              patch                     0                   1112f04477239       ingress-nginx-admission-patch-5wr2c
	833daa1d9c053       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   13 minutes ago      Exited              create                    0                   7f4d1f6450537       ingress-nginx-admission-create-lffjf
	7ef4a6c40dbe3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        14 minutes ago      Running             metrics-server            0                   c04f5bd826354       metrics-server-84c5f94fbc-4mp2p
	0b70bc07a6fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             14 minutes ago      Running             storage-provisioner       0                   e7805858822ce       storage-provisioner
	8bb7c1b21e074       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             14 minutes ago      Running             coredns                   0                   c9d76344783a2       coredns-6f6b679f8f-fg5wn
	dc9d1779c9ec0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             14 minutes ago      Running             kube-proxy                0                   cd53e58a6020b       kube-proxy-pf4zb
	88f24112cdf2e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             14 minutes ago      Running             kube-controller-manager   0                   1cce6cbc6a4fa       kube-controller-manager-addons-132210
	d5a6630200902       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             14 minutes ago      Running             kube-apiserver            0                   e2253778a2445       kube-apiserver-addons-132210
	9e07eecb0bd41       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             14 minutes ago      Running             etcd                      0                   54cbd2b4b9e2e       etcd-addons-132210
	ea40b4dfb934e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             14 minutes ago      Running             kube-scheduler            0                   3f1a88db7a62d       kube-scheduler-addons-132210
	
	
	==> coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] <==
	[INFO] 10.244.0.8:59871 - 44836 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110421s
	[INFO] 10.244.0.8:33356 - 42014 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135624s
	[INFO] 10.244.0.8:33356 - 8221 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242056s
	[INFO] 10.244.0.8:35585 - 13377 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041231s
	[INFO] 10.244.0.8:35585 - 3142 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000183049s
	[INFO] 10.244.0.8:47934 - 56724 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038372s
	[INFO] 10.244.0.8:47934 - 6297 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105405s
	[INFO] 10.244.0.8:48416 - 43339 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095854s
	[INFO] 10.244.0.8:48416 - 20808 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089325s
	[INFO] 10.244.0.8:60809 - 24507 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090972s
	[INFO] 10.244.0.8:60809 - 27316 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000241444s
	[INFO] 10.244.0.8:39141 - 61060 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013393s
	[INFO] 10.244.0.8:39141 - 6786 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000294732s
	[INFO] 10.244.0.8:47336 - 11940 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039145s
	[INFO] 10.244.0.8:47336 - 21158 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101081s
	[INFO] 10.244.0.8:36849 - 58078 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195322s
	[INFO] 10.244.0.8:36849 - 19164 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290198s
	[INFO] 10.244.0.22:57715 - 978 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363634s
	[INFO] 10.244.0.22:36290 - 10290 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000102337s
	[INFO] 10.244.0.22:59607 - 56162 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068575s
	[INFO] 10.244.0.22:57832 - 20486 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115987s
	[INFO] 10.244.0.22:47101 - 58158 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072188s
	[INFO] 10.244.0.22:54115 - 35881 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059499s
	[INFO] 10.244.0.22:38928 - 44111 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003739828s
	[INFO] 10.244.0.22:51045 - 42584 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003766695s
	
	
	==> describe nodes <==
	Name:               addons-132210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-132210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-132210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_07_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-132210
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:07:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-132210
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:21:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:19:24 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:19:24 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:19:24 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:19:24 +0000   Sat, 31 Aug 2024 22:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    addons-132210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 12c3f930f06943eb9eedcbe740b437c1
	  System UUID:                12c3f930-f069-43eb-9eed-cbe740b437c1
	  Boot ID:                    0c2dfdc3-b8db-4280-8b08-729176a830ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-bh4sk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-89d5ffd79-6n2z6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  headlamp                    headlamp-57fb76fcdb-zb4l7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 coredns-6f6b679f8f-fg5wn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-132210                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-132210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-132210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pf4zb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-132210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-4mp2p          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-132210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-132210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-132210 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-132210 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-132210 event: Registered Node addons-132210 in Controller
	
	
	==> dmesg <==
	[Aug31 22:08] kauditd_printk_skb: 41 callbacks suppressed
	[ +10.213253] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.886474] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.896138] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.604296] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.756625] kauditd_printk_skb: 12 callbacks suppressed
	[Aug31 22:09] kauditd_printk_skb: 12 callbacks suppressed
	[ +32.975043] kauditd_printk_skb: 32 callbacks suppressed
	[ +15.460927] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.545206] kauditd_printk_skb: 2 callbacks suppressed
	[Aug31 22:10] kauditd_printk_skb: 9 callbacks suppressed
	[Aug31 22:11] kauditd_printk_skb: 28 callbacks suppressed
	[Aug31 22:14] kauditd_printk_skb: 28 callbacks suppressed
	[Aug31 22:18] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.322338] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.045430] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.602574] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.435262] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.828071] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.293926] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.470597] kauditd_printk_skb: 6 callbacks suppressed
	[Aug31 22:19] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.179886] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.551519] kauditd_printk_skb: 41 callbacks suppressed
	[Aug31 22:21] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] <==
	{"level":"warn","ts":"2024-08-31T22:08:38.406522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.287295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:38.406577Z","caller":"traceutil/trace.go:171","msg":"trace[1395541970] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"105.345528ms","start":"2024-08-31T22:08:38.301224Z","end":"2024-08-31T22:08:38.406569Z","steps":["trace[1395541970] 'agreement among raft nodes before linearized reading'  (duration: 105.278207ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:40.364527Z","caller":"traceutil/trace.go:171","msg":"trace[449812390] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"147.838759ms","start":"2024-08-31T22:08:40.216672Z","end":"2024-08-31T22:08:40.364511Z","steps":["trace[449812390] 'process raft request'  (duration: 147.722077ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:46.932596Z","caller":"traceutil/trace.go:171","msg":"trace[1095917297] linearizableReadLoop","detail":"{readStateIndex:1115; appliedIndex:1114; }","duration":"131.800969ms","start":"2024-08-31T22:08:46.800782Z","end":"2024-08-31T22:08:46.932583Z","steps":["trace[1095917297] 'read index received'  (duration: 131.639117ms)","trace[1095917297] 'applied index is now lower than readState.Index'  (duration: 161.4µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:08:46.932831Z","caller":"traceutil/trace.go:171","msg":"trace[219276231] transaction","detail":"{read_only:false; response_revision:1084; number_of_response:1; }","duration":"225.630773ms","start":"2024-08-31T22:08:46.707192Z","end":"2024-08-31T22:08:46.932823Z","steps":["trace[219276231] 'process raft request'  (duration: 225.308004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.268644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933105Z","caller":"traceutil/trace.go:171","msg":"trace[850287395] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1084; }","duration":"132.320492ms","start":"2024-08-31T22:08:46.800778Z","end":"2024-08-31T22:08:46.933098Z","steps":["trace[850287395] 'agreement among raft nodes before linearized reading'  (duration: 132.252602ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.180196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933247Z","caller":"traceutil/trace.go:171","msg":"trace[660106792] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1084; }","duration":"123.218846ms","start":"2024-08-31T22:08:46.810023Z","end":"2024-08-31T22:08:46.933242Z","steps":["trace[660106792] 'agreement among raft nodes before linearized reading'  (duration: 123.16896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.542858ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933623Z","caller":"traceutil/trace.go:171","msg":"trace[330872549] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1084; }","duration":"108.584553ms","start":"2024-08-31T22:08:46.825032Z","end":"2024-08-31T22:08:46.933616Z","steps":["trace[330872549] 'agreement among raft nodes before linearized reading'  (duration: 108.535322ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:49.655357Z","caller":"traceutil/trace.go:171","msg":"trace[165319690] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"136.350729ms","start":"2024-08-31T22:08:49.518991Z","end":"2024-08-31T22:08:49.655342Z","steps":["trace[165319690] 'process raft request'  (duration: 136.128055ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:49.661370Z","caller":"traceutil/trace.go:171","msg":"trace[1593117983] transaction","detail":"{read_only:false; response_revision:1101; number_of_response:1; }","duration":"135.493651ms","start":"2024-08-31T22:08:49.525861Z","end":"2024-08-31T22:08:49.661354Z","steps":["trace[1593117983] 'process raft request'  (duration: 134.988688ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:54.136074Z","caller":"traceutil/trace.go:171","msg":"trace[1104677073] linearizableReadLoop","detail":"{readStateIndex:1165; appliedIndex:1164; }","duration":"172.969109ms","start":"2024-08-31T22:08:53.963035Z","end":"2024-08-31T22:08:54.136004Z","steps":["trace[1104677073] 'read index received'  (duration: 170.41125ms)","trace[1104677073] 'applied index is now lower than readState.Index'  (duration: 2.557067ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-31T22:08:54.136319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.226891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:54.136413Z","caller":"traceutil/trace.go:171","msg":"trace[851686441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"173.346413ms","start":"2024-08-31T22:08:53.963007Z","end":"2024-08-31T22:08:54.136353Z","steps":["trace[851686441] 'agreement among raft nodes before linearized reading'  (duration: 173.201927ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:09:11.180801Z","caller":"traceutil/trace.go:171","msg":"trace[143927082] linearizableReadLoop","detail":"{readStateIndex:1232; appliedIndex:1231; }","duration":"217.79961ms","start":"2024-08-31T22:09:10.962974Z","end":"2024-08-31T22:09:11.180774Z","steps":["trace[143927082] 'read index received'  (duration: 217.657091ms)","trace[143927082] 'applied index is now lower than readState.Index'  (duration: 142.006µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:09:11.180954Z","caller":"traceutil/trace.go:171","msg":"trace[41968220] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"247.07813ms","start":"2024-08-31T22:09:10.933868Z","end":"2024-08-31T22:09:11.180946Z","steps":["trace[41968220] 'process raft request'  (duration: 246.800851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:09:11.181156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.482027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"warn","ts":"2024-08-31T22:09:11.181231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.247568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:09:11.181305Z","caller":"traceutil/trace.go:171","msg":"trace[497721371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1196; }","duration":"218.327497ms","start":"2024-08-31T22:09:10.962970Z","end":"2024-08-31T22:09:11.181277Z","steps":["trace[497721371] 'agreement among raft nodes before linearized reading'  (duration: 218.228122ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:09:11.181240Z","caller":"traceutil/trace.go:171","msg":"trace[450022890] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1196; }","duration":"132.57275ms","start":"2024-08-31T22:09:11.048648Z","end":"2024-08-31T22:09:11.181221Z","steps":["trace[450022890] 'agreement among raft nodes before linearized reading'  (duration: 132.417556ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:17:14.568202Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1526}
	{"level":"info","ts":"2024-08-31T22:17:14.607762Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1526,"took":"38.547549ms","hash":33265301,"current-db-size-bytes":6266880,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3313664,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-08-31T22:17:14.607883Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":33265301,"revision":1526,"compact-revision":-1}
	
	
	==> gcp-auth [a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0] <==
	2024/08/31 22:09:59 Ready to write response ...
	2024/08/31 22:18:02 Ready to marshal response ...
	2024/08/31 22:18:02 Ready to write response ...
	2024/08/31 22:18:02 Ready to marshal response ...
	2024/08/31 22:18:02 Ready to write response ...
	2024/08/31 22:18:13 Ready to marshal response ...
	2024/08/31 22:18:13 Ready to write response ...
	2024/08/31 22:18:14 Ready to marshal response ...
	2024/08/31 22:18:14 Ready to write response ...
	2024/08/31 22:18:18 Ready to marshal response ...
	2024/08/31 22:18:18 Ready to write response ...
	2024/08/31 22:18:38 Ready to marshal response ...
	2024/08/31 22:18:38 Ready to write response ...
	2024/08/31 22:18:59 Ready to marshal response ...
	2024/08/31 22:18:59 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:10 Ready to marshal response ...
	2024/08/31 22:19:10 Ready to write response ...
	2024/08/31 22:21:36 Ready to marshal response ...
	2024/08/31 22:21:36 Ready to write response ...
	
	
	==> kernel <==
	 22:21:46 up 15 min,  0 users,  load average: 0.14, 0.45, 0.43
	Linux addons-132210 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] <==
	E0831 22:08:55.517599       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0831 22:08:55.517717       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.101.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.101.143:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I0831 22:08:55.540025       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0831 22:18:30.350419       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0831 22:18:31.825010       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:18:54.356239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.356735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.443718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.443780       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.469865       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.470384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.501684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.501737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:18:55.472783       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:18:55.502100       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0831 22:18:55.519265       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0831 22:19:04.668043       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0831 22:19:05.793178       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:19:06.516419       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.208.123"}
	I0831 22:19:10.568572       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:19:10.763197       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.55.157"}
	I0831 22:21:36.578157       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.3.72"}
	
	
	==> kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] <==
	W0831 22:20:20.145716       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:20:20.145835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:20:26.810992       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:20:26.811048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:20:48.767448       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:20:48.767581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:20:57.111838       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:20:57.111955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:21:12.696381       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:21:12.696434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:21:17.593714       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:21:17.593830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:21:33.564951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:21:33.565000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:21:33.611165       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:21:33.611264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:21:36.420367       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="63.613654ms"
	I0831 22:21:36.427538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.114721ms"
	I0831 22:21:36.441530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.906978ms"
	I0831 22:21:36.441603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.2µs"
	I0831 22:21:38.498964       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0831 22:21:38.503343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.917µs"
	I0831 22:21:38.507293       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0831 22:21:39.905599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.138637ms"
	I0831 22:21:39.907724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="92.839µs"
	
	
	==> kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:07:25.903033       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:07:25.911310       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0831 22:07:25.911403       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:07:25.982344       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:07:25.982403       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:07:25.982435       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:07:25.985880       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:07:25.986197       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:07:25.986208       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:07:25.987987       1 config.go:197] "Starting service config controller"
	I0831 22:07:25.988004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:07:25.988023       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:07:25.988027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:07:25.988362       1 config.go:326] "Starting node config controller"
	I0831 22:07:25.988369       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:07:26.089133       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:07:26.089163       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:07:26.089183       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] <==
	E0831 22:07:16.254732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0831 22:07:16.241012       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:07:17.051102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:07:17.051135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.097676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:07:17.097729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.116710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:07:17.116759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.238680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:07:17.238731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.308444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:07:17.308680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.361218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:07:17.361749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.445778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:07:17.445880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.451014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:07:17.451126       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:07:17.464610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:07:17.464787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.482630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:07:17.482757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.545180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:07:17.545318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:07:19.433315       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:21:36 addons-132210 kubelet[1197]: E0831 22:21:36.671292    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4d4c7d4f-e101-4a1a-8b8f-6d8a0cd8de3f"
	Aug 31 22:21:37 addons-132210 kubelet[1197]: I0831 22:21:37.597220    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2mk4\" (UniqueName: \"kubernetes.io/projected/0e0b7880-36a9-4588-b4f2-69ee4d28f341-kube-api-access-l2mk4\") pod \"0e0b7880-36a9-4588-b4f2-69ee4d28f341\" (UID: \"0e0b7880-36a9-4588-b4f2-69ee4d28f341\") "
	Aug 31 22:21:37 addons-132210 kubelet[1197]: I0831 22:21:37.601872    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0b7880-36a9-4588-b4f2-69ee4d28f341-kube-api-access-l2mk4" (OuterVolumeSpecName: "kube-api-access-l2mk4") pod "0e0b7880-36a9-4588-b4f2-69ee4d28f341" (UID: "0e0b7880-36a9-4588-b4f2-69ee4d28f341"). InnerVolumeSpecName "kube-api-access-l2mk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:21:37 addons-132210 kubelet[1197]: I0831 22:21:37.698407    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l2mk4\" (UniqueName: \"kubernetes.io/projected/0e0b7880-36a9-4588-b4f2-69ee4d28f341-kube-api-access-l2mk4\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:21:37 addons-132210 kubelet[1197]: I0831 22:21:37.858344    1197 scope.go:117] "RemoveContainer" containerID="b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271"
	Aug 31 22:21:37 addons-132210 kubelet[1197]: I0831 22:21:37.886801    1197 scope.go:117] "RemoveContainer" containerID="b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271"
	Aug 31 22:21:37 addons-132210 kubelet[1197]: E0831 22:21:37.890086    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271\": container with ID starting with b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271 not found: ID does not exist" containerID="b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271"
	Aug 31 22:21:37 addons-132210 kubelet[1197]: I0831 22:21:37.890120    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271"} err="failed to get container status \"b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271\": rpc error: code = NotFound desc = could not find container \"b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271\": container with ID starting with b3036ecaa0d68c34e204368ca2d8349568e607424189abf93a6dd4e10ba0f271 not found: ID does not exist"
	Aug 31 22:21:38 addons-132210 kubelet[1197]: I0831 22:21:38.674140    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0b7880-36a9-4588-b4f2-69ee4d28f341" path="/var/lib/kubelet/pods/0e0b7880-36a9-4588-b4f2-69ee4d28f341/volumes"
	Aug 31 22:21:38 addons-132210 kubelet[1197]: I0831 22:21:38.674832    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b310420-abf8-48e1-8b44-b000e6d4e2de" path="/var/lib/kubelet/pods/7b310420-abf8-48e1-8b44-b000e6d4e2de/volumes"
	Aug 31 22:21:38 addons-132210 kubelet[1197]: I0831 22:21:38.675322    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70949d6-004f-45a1-95b4-cda03aefe9de" path="/var/lib/kubelet/pods/e70949d6-004f-45a1-95b4-cda03aefe9de/volumes"
	Aug 31 22:21:39 addons-132210 kubelet[1197]: E0831 22:21:39.019683    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142899019247980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571173,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:21:39 addons-132210 kubelet[1197]: E0831 22:21:39.019868    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142899019247980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571173,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:21:41 addons-132210 kubelet[1197]: E0831 22:21:41.559668    1197 kuberuntime_container.go:691] "PreStop hook failed" err="command '/wait-shutdown' exited with 137: " pod="ingress-nginx/ingress-nginx-controller-bc57996ff-vtskh" podUID="e462e5dd-0936-4ad8-bbf2-8be4b08ede14" containerName="controller" containerID="cri-o://ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68"
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.826576    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e462e5dd-0936-4ad8-bbf2-8be4b08ede14-webhook-cert\") pod \"e462e5dd-0936-4ad8-bbf2-8be4b08ede14\" (UID: \"e462e5dd-0936-4ad8-bbf2-8be4b08ede14\") "
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.826635    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hczfj\" (UniqueName: \"kubernetes.io/projected/e462e5dd-0936-4ad8-bbf2-8be4b08ede14-kube-api-access-hczfj\") pod \"e462e5dd-0936-4ad8-bbf2-8be4b08ede14\" (UID: \"e462e5dd-0936-4ad8-bbf2-8be4b08ede14\") "
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.829017    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e462e5dd-0936-4ad8-bbf2-8be4b08ede14-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e462e5dd-0936-4ad8-bbf2-8be4b08ede14" (UID: "e462e5dd-0936-4ad8-bbf2-8be4b08ede14"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.829700    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e462e5dd-0936-4ad8-bbf2-8be4b08ede14-kube-api-access-hczfj" (OuterVolumeSpecName: "kube-api-access-hczfj") pod "e462e5dd-0936-4ad8-bbf2-8be4b08ede14" (UID: "e462e5dd-0936-4ad8-bbf2-8be4b08ede14"). InnerVolumeSpecName "kube-api-access-hczfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.889797    1197 scope.go:117] "RemoveContainer" containerID="ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68"
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.910660    1197 scope.go:117] "RemoveContainer" containerID="ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68"
	Aug 31 22:21:41 addons-132210 kubelet[1197]: E0831 22:21:41.911218    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68\": container with ID starting with ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68 not found: ID does not exist" containerID="ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68"
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.911243    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68"} err="failed to get container status \"ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68\": rpc error: code = NotFound desc = could not find container \"ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68\": container with ID starting with ea07f9fc27ba412c4a7d6bf7542b2c9e18ca5905ae35039a0af2c52700624d68 not found: ID does not exist"
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.927704    1197 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e462e5dd-0936-4ad8-bbf2-8be4b08ede14-webhook-cert\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:21:41 addons-132210 kubelet[1197]: I0831 22:21:41.927742    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hczfj\" (UniqueName: \"kubernetes.io/projected/e462e5dd-0936-4ad8-bbf2-8be4b08ede14-kube-api-access-hczfj\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:21:42 addons-132210 kubelet[1197]: I0831 22:21:42.674292    1197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e462e5dd-0936-4ad8-bbf2-8be4b08ede14" path="/var/lib/kubelet/pods/e462e5dd-0936-4ad8-bbf2-8be4b08ede14/volumes"
	
	
	==> storage-provisioner [0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e] <==
	I0831 22:07:33.356182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:07:33.426579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:07:33.426654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:07:33.847351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:07:33.848726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5!
	I0831 22:07:33.850075       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8a1f8e-16e7-4a54-81fb-1116caaffa55", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5 became leader
	I0831 22:07:33.951304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5!
	

                                                
                                                
-- /stdout --
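The kubelet entries in the log above repeatedly report "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" against the CRI-O image filesystem. A minimal way to look at the same data by hand, assuming SSH access to the node and that crictl is available inside the minikube VM (not something this test run did), is:

	# query CRI-O's image filesystem usage directly; this is the same
	# ImageFsInfo data the kubelet eviction manager is complaining about
	out/minikube-linux-amd64 -p addons-132210 ssh -- sudo crictl imagefsinfo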
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-132210 -n addons-132210
helpers_test.go:262: (dbg) Run:  kubectl --context addons-132210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-132210 describe pod busybox
helpers_test.go:283: (dbg) kubectl --context addons-132210 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-132210/192.168.39.12
	Start Time:       Sat, 31 Aug 2024 22:09:59 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzs9l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wzs9l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-132210
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m56s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    102s (x44 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:286: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (157.33s)
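The describe output above shows busybox stuck in ImagePullBackOff because pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc failed with "unable to retrieve auth token: invalid username/password". A quick way to separate a registry-side problem from the pull credentials injected into the pod (likely the fake ones from the gcp-auth addon) is sketched below; it assumes SSH access to the node and was not part of the recorded run:

	# a direct pull bypasses the pod's injected pull secret; if this succeeds,
	# the failure is in the credentials the pod was given, not the registry
	out/minikube-linux-amd64 -p addons-132210 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# re-check the pull events for the pod afterwards
	kubectl --context addons-132210 get events -n default --field-selector involvedObject.name=busybox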

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (286.15s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.557203ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004354982s
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (81.734057ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 11m1.267534696s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (64.407418ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 11m4.302391415s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (69.881142ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 11m9.425178907s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (63.314675ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 11m17.614694398s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (62.316404ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 11m27.083589194s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (65.191518ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 11m49.444954487s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (60.472181ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 12m17.0361662s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (60.686908ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 13m2.400301781s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (62.472834ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 13m31.904910667s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (73.680945ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 14m12.325320732s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (60.588871ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 14m53.836971793s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-132210 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-132210 top pods -n kube-system: exit status 1 (61.222716ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fg5wn, age: 15m39.397898606s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
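Every `kubectl top pods` attempt above failed with "Metrics not available" until the retry budget ran out. A minimal check of whether the aggregated metrics API was ever registered and serving, assuming the standard apiservice name and deployment name used by metrics-server (not commands the test actually ran), would be:

	# is the aggregated metrics API registered and reporting Available?
	kubectl --context addons-132210 get apiservice v1beta1.metrics.k8s.io
	# ask the metrics API directly, bypassing kubectl top's formatting
	kubectl --context addons-132210 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	# inspect metrics-server's own logs for scrape or TLS errors
	kubectl --context addons-132210 -n kube-system logs deploy/metrics-server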
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-132210 -n addons-132210
helpers_test.go:245: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 logs -n 25: (1.440149329s)
helpers_test.go:253: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-160287                                                                     | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-777221                                                                     | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-465268 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | binary-mirror-465268                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45273                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-465268                                                                     | binary-mirror-465268 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-132210 --wait=true                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-132210 ssh cat                                                                       | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | /opt/local-path-provisioner/pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | addons-132210                                                                               |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | -p addons-132210                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-132210 ip                                                                            | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | -p addons-132210                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-132210 ssh curl -s                                                                   | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-132210 ip                                                                            | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:21 UTC | 31 Aug 24 22:21 UTC |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:21 UTC | 31 Aug 24 22:21 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-132210 addons disable                                                                | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:21 UTC | 31 Aug 24 22:21 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-132210 addons                                                                        | addons-132210        | jenkins | v1.33.1 | 31 Aug 24 22:23 UTC | 31 Aug 24 22:23 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:37.544876   21098 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:37.545155   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:37.545165   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:37.545172   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:37.545383   21098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:06:37.545946   21098 out.go:352] Setting JSON to false
	I0831 22:06:37.546798   21098 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2945,"bootTime":1725139053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:37.546859   21098 start.go:139] virtualization: kvm guest
	I0831 22:06:37.548701   21098 out.go:177] * [addons-132210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:06:37.550111   21098 notify.go:220] Checking for updates...
	I0831 22:06:37.550129   21098 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:06:37.551500   21098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:37.552938   21098 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:06:37.554280   21098 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:37.555749   21098 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:06:37.557091   21098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:06:37.558401   21098 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:37.589360   21098 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 22:06:37.590841   21098 start.go:297] selected driver: kvm2
	I0831 22:06:37.590856   21098 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:06:37.590868   21098 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:06:37.591824   21098 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:37.591929   21098 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:06:37.606642   21098 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:06:37.606704   21098 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:37.606922   21098 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:06:37.606953   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:06:37.606960   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:06:37.606967   21098 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:37.607020   21098 start.go:340] cluster config:
	{Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:37.607103   21098 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:37.608999   21098 out.go:177] * Starting "addons-132210" primary control-plane node in "addons-132210" cluster
	I0831 22:06:37.610406   21098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:06:37.610441   21098 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:37.610451   21098 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:37.610537   21098 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:06:37.610551   21098 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:06:37.610893   21098 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json ...
	I0831 22:06:37.610917   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json: {Name:mk700584d59ad42df80709b4fc4c500ed7306a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:37.611077   21098 start.go:360] acquireMachinesLock for addons-132210: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:06:37.611133   21098 start.go:364] duration metric: took 40.383µs to acquireMachinesLock for "addons-132210"
	I0831 22:06:37.611156   21098 start.go:93] Provisioning new machine with config: &{Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:06:37.611223   21098 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 22:06:37.613166   21098 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0831 22:06:37.613301   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:06:37.613345   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:06:37.627241   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0831 22:06:37.627637   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:06:37.628132   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:06:37.628166   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:06:37.628421   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:06:37.628636   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:06:37.628770   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:06:37.628882   21098 start.go:159] libmachine.API.Create for "addons-132210" (driver="kvm2")
	I0831 22:06:37.628903   21098 client.go:168] LocalClient.Create starting
	I0831 22:06:37.628944   21098 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:06:37.824136   21098 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:06:38.014796   21098 main.go:141] libmachine: Running pre-create checks...
	I0831 22:06:38.014823   21098 main.go:141] libmachine: (addons-132210) Calling .PreCreateCheck
	I0831 22:06:38.015353   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:06:38.015789   21098 main.go:141] libmachine: Creating machine...
	I0831 22:06:38.015803   21098 main.go:141] libmachine: (addons-132210) Calling .Create
	I0831 22:06:38.015942   21098 main.go:141] libmachine: (addons-132210) Creating KVM machine...
	I0831 22:06:38.017102   21098 main.go:141] libmachine: (addons-132210) DBG | found existing default KVM network
	I0831 22:06:38.017881   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.017718   21120 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0831 22:06:38.017904   21098 main.go:141] libmachine: (addons-132210) DBG | created network xml: 
	I0831 22:06:38.017916   21098 main.go:141] libmachine: (addons-132210) DBG | <network>
	I0831 22:06:38.017928   21098 main.go:141] libmachine: (addons-132210) DBG |   <name>mk-addons-132210</name>
	I0831 22:06:38.017940   21098 main.go:141] libmachine: (addons-132210) DBG |   <dns enable='no'/>
	I0831 22:06:38.017950   21098 main.go:141] libmachine: (addons-132210) DBG |   
	I0831 22:06:38.017970   21098 main.go:141] libmachine: (addons-132210) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0831 22:06:38.017978   21098 main.go:141] libmachine: (addons-132210) DBG |     <dhcp>
	I0831 22:06:38.017991   21098 main.go:141] libmachine: (addons-132210) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0831 22:06:38.018001   21098 main.go:141] libmachine: (addons-132210) DBG |     </dhcp>
	I0831 22:06:38.018013   21098 main.go:141] libmachine: (addons-132210) DBG |   </ip>
	I0831 22:06:38.018023   21098 main.go:141] libmachine: (addons-132210) DBG |   
	I0831 22:06:38.018033   21098 main.go:141] libmachine: (addons-132210) DBG | </network>
	I0831 22:06:38.018046   21098 main.go:141] libmachine: (addons-132210) DBG | 
	I0831 22:06:38.023383   21098 main.go:141] libmachine: (addons-132210) DBG | trying to create private KVM network mk-addons-132210 192.168.39.0/24...
	I0831 22:06:38.089434   21098 main.go:141] libmachine: (addons-132210) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 ...
	I0831 22:06:38.089471   21098 main.go:141] libmachine: (addons-132210) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:06:38.089479   21098 main.go:141] libmachine: (addons-132210) DBG | private KVM network mk-addons-132210 192.168.39.0/24 created
	I0831 22:06:38.089493   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.089368   21120 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:38.089534   21098 main.go:141] libmachine: (addons-132210) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:06:38.337644   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.337536   21120 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa...
	I0831 22:06:38.706397   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.706261   21120 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/addons-132210.rawdisk...
	I0831 22:06:38.706425   21098 main.go:141] libmachine: (addons-132210) DBG | Writing magic tar header
	I0831 22:06:38.706435   21098 main.go:141] libmachine: (addons-132210) DBG | Writing SSH key tar header
	I0831 22:06:38.706447   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:38.706368   21120 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 ...
	I0831 22:06:38.706460   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210
	I0831 22:06:38.706528   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210 (perms=drwx------)
	I0831 22:06:38.706557   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:06:38.706570   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:06:38.706579   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:38.706596   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:06:38.706607   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:06:38.706621   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:06:38.706633   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:06:38.706649   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:06:38.706662   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:06:38.706672   21098 main.go:141] libmachine: (addons-132210) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:06:38.706683   21098 main.go:141] libmachine: (addons-132210) Creating domain...
	I0831 22:06:38.706692   21098 main.go:141] libmachine: (addons-132210) DBG | Checking permissions on dir: /home
	I0831 22:06:38.706704   21098 main.go:141] libmachine: (addons-132210) DBG | Skipping /home - not owner
	I0831 22:06:38.707726   21098 main.go:141] libmachine: (addons-132210) define libvirt domain using xml: 
	I0831 22:06:38.707749   21098 main.go:141] libmachine: (addons-132210) <domain type='kvm'>
	I0831 22:06:38.707757   21098 main.go:141] libmachine: (addons-132210)   <name>addons-132210</name>
	I0831 22:06:38.707766   21098 main.go:141] libmachine: (addons-132210)   <memory unit='MiB'>4000</memory>
	I0831 22:06:38.707792   21098 main.go:141] libmachine: (addons-132210)   <vcpu>2</vcpu>
	I0831 22:06:38.707816   21098 main.go:141] libmachine: (addons-132210)   <features>
	I0831 22:06:38.707830   21098 main.go:141] libmachine: (addons-132210)     <acpi/>
	I0831 22:06:38.707843   21098 main.go:141] libmachine: (addons-132210)     <apic/>
	I0831 22:06:38.707865   21098 main.go:141] libmachine: (addons-132210)     <pae/>
	I0831 22:06:38.707885   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.707895   21098 main.go:141] libmachine: (addons-132210)   </features>
	I0831 22:06:38.707905   21098 main.go:141] libmachine: (addons-132210)   <cpu mode='host-passthrough'>
	I0831 22:06:38.707915   21098 main.go:141] libmachine: (addons-132210)   
	I0831 22:06:38.707924   21098 main.go:141] libmachine: (addons-132210)   </cpu>
	I0831 22:06:38.707929   21098 main.go:141] libmachine: (addons-132210)   <os>
	I0831 22:06:38.707936   21098 main.go:141] libmachine: (addons-132210)     <type>hvm</type>
	I0831 22:06:38.707942   21098 main.go:141] libmachine: (addons-132210)     <boot dev='cdrom'/>
	I0831 22:06:38.707948   21098 main.go:141] libmachine: (addons-132210)     <boot dev='hd'/>
	I0831 22:06:38.707954   21098 main.go:141] libmachine: (addons-132210)     <bootmenu enable='no'/>
	I0831 22:06:38.707960   21098 main.go:141] libmachine: (addons-132210)   </os>
	I0831 22:06:38.707966   21098 main.go:141] libmachine: (addons-132210)   <devices>
	I0831 22:06:38.707975   21098 main.go:141] libmachine: (addons-132210)     <disk type='file' device='cdrom'>
	I0831 22:06:38.708007   21098 main.go:141] libmachine: (addons-132210)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/boot2docker.iso'/>
	I0831 22:06:38.708027   21098 main.go:141] libmachine: (addons-132210)       <target dev='hdc' bus='scsi'/>
	I0831 22:06:38.708034   21098 main.go:141] libmachine: (addons-132210)       <readonly/>
	I0831 22:06:38.708039   21098 main.go:141] libmachine: (addons-132210)     </disk>
	I0831 22:06:38.708051   21098 main.go:141] libmachine: (addons-132210)     <disk type='file' device='disk'>
	I0831 22:06:38.708065   21098 main.go:141] libmachine: (addons-132210)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:06:38.708082   21098 main.go:141] libmachine: (addons-132210)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/addons-132210.rawdisk'/>
	I0831 22:06:38.708092   21098 main.go:141] libmachine: (addons-132210)       <target dev='hda' bus='virtio'/>
	I0831 22:06:38.708106   21098 main.go:141] libmachine: (addons-132210)     </disk>
	I0831 22:06:38.708123   21098 main.go:141] libmachine: (addons-132210)     <interface type='network'>
	I0831 22:06:38.708137   21098 main.go:141] libmachine: (addons-132210)       <source network='mk-addons-132210'/>
	I0831 22:06:38.708149   21098 main.go:141] libmachine: (addons-132210)       <model type='virtio'/>
	I0831 22:06:38.708162   21098 main.go:141] libmachine: (addons-132210)     </interface>
	I0831 22:06:38.708173   21098 main.go:141] libmachine: (addons-132210)     <interface type='network'>
	I0831 22:06:38.708181   21098 main.go:141] libmachine: (addons-132210)       <source network='default'/>
	I0831 22:06:38.708190   21098 main.go:141] libmachine: (addons-132210)       <model type='virtio'/>
	I0831 22:06:38.708213   21098 main.go:141] libmachine: (addons-132210)     </interface>
	I0831 22:06:38.708228   21098 main.go:141] libmachine: (addons-132210)     <serial type='pty'>
	I0831 22:06:38.708239   21098 main.go:141] libmachine: (addons-132210)       <target port='0'/>
	I0831 22:06:38.708252   21098 main.go:141] libmachine: (addons-132210)     </serial>
	I0831 22:06:38.708262   21098 main.go:141] libmachine: (addons-132210)     <console type='pty'>
	I0831 22:06:38.708276   21098 main.go:141] libmachine: (addons-132210)       <target type='serial' port='0'/>
	I0831 22:06:38.708292   21098 main.go:141] libmachine: (addons-132210)     </console>
	I0831 22:06:38.708304   21098 main.go:141] libmachine: (addons-132210)     <rng model='virtio'>
	I0831 22:06:38.708316   21098 main.go:141] libmachine: (addons-132210)       <backend model='random'>/dev/random</backend>
	I0831 22:06:38.708328   21098 main.go:141] libmachine: (addons-132210)     </rng>
	I0831 22:06:38.708338   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.708349   21098 main.go:141] libmachine: (addons-132210)     
	I0831 22:06:38.708362   21098 main.go:141] libmachine: (addons-132210)   </devices>
	I0831 22:06:38.708377   21098 main.go:141] libmachine: (addons-132210) </domain>
	I0831 22:06:38.708386   21098 main.go:141] libmachine: (addons-132210) 
	I0831 22:06:38.714749   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:04:9d:ea in network default
	I0831 22:06:38.715229   21098 main.go:141] libmachine: (addons-132210) Ensuring networks are active...
	I0831 22:06:38.715251   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:38.715857   21098 main.go:141] libmachine: (addons-132210) Ensuring network default is active
	I0831 22:06:38.716174   21098 main.go:141] libmachine: (addons-132210) Ensuring network mk-addons-132210 is active
	I0831 22:06:38.716662   21098 main.go:141] libmachine: (addons-132210) Getting domain xml...
	I0831 22:06:38.717336   21098 main.go:141] libmachine: (addons-132210) Creating domain...
	I0831 22:06:40.114794   21098 main.go:141] libmachine: (addons-132210) Waiting to get IP...
	I0831 22:06:40.115527   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.115799   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.115829   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.115776   21120 retry.go:31] will retry after 204.646064ms: waiting for machine to come up
	I0831 22:06:40.322141   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.322530   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.322561   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.322474   21120 retry.go:31] will retry after 367.388706ms: waiting for machine to come up
	I0831 22:06:40.691020   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:40.691359   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:40.691385   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:40.691306   21120 retry.go:31] will retry after 449.926201ms: waiting for machine to come up
	I0831 22:06:41.142806   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:41.143371   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:41.143398   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:41.143199   21120 retry.go:31] will retry after 411.198107ms: waiting for machine to come up
	I0831 22:06:41.555507   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:41.556022   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:41.556044   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:41.555945   21120 retry.go:31] will retry after 684.989531ms: waiting for machine to come up
	I0831 22:06:42.242958   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:42.243440   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:42.243461   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:42.243416   21120 retry.go:31] will retry after 922.263131ms: waiting for machine to come up
	I0831 22:06:43.167145   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:43.167604   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:43.167629   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:43.167554   21120 retry.go:31] will retry after 879.584878ms: waiting for machine to come up
	I0831 22:06:44.048638   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:44.048976   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:44.048997   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:44.048933   21120 retry.go:31] will retry after 1.427746455s: waiting for machine to come up
	I0831 22:06:45.478039   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:45.478640   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:45.478666   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:45.478603   21120 retry.go:31] will retry after 1.190362049s: waiting for machine to come up
	I0831 22:06:46.671043   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:46.671501   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:46.671530   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:46.671448   21120 retry.go:31] will retry after 2.196766808s: waiting for machine to come up
	I0831 22:06:48.869585   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:48.870037   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:48.870059   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:48.869999   21120 retry.go:31] will retry after 2.216870251s: waiting for machine to come up
	I0831 22:06:51.089344   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:51.089783   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:51.089804   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:51.089726   21120 retry.go:31] will retry after 3.489292564s: waiting for machine to come up
	I0831 22:06:54.581936   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:54.582398   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:54.582426   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:54.582313   21120 retry.go:31] will retry after 2.860598857s: waiting for machine to come up
	I0831 22:06:57.446192   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:06:57.446589   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find current IP address of domain addons-132210 in network mk-addons-132210
	I0831 22:06:57.446614   21098 main.go:141] libmachine: (addons-132210) DBG | I0831 22:06:57.446501   21120 retry.go:31] will retry after 4.269318205s: waiting for machine to come up
	I0831 22:07:01.720788   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.721275   21098 main.go:141] libmachine: (addons-132210) Found IP for machine: 192.168.39.12
	I0831 22:07:01.721302   21098 main.go:141] libmachine: (addons-132210) Reserving static IP address...
	I0831 22:07:01.721320   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has current primary IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.721673   21098 main.go:141] libmachine: (addons-132210) DBG | unable to find host DHCP lease matching {name: "addons-132210", mac: "52:54:00:35:a4:57", ip: "192.168.39.12"} in network mk-addons-132210
	I0831 22:07:01.793692   21098 main.go:141] libmachine: (addons-132210) DBG | Getting to WaitForSSH function...
	I0831 22:07:01.793719   21098 main.go:141] libmachine: (addons-132210) Reserved static IP address: 192.168.39.12
	I0831 22:07:01.793733   21098 main.go:141] libmachine: (addons-132210) Waiting for SSH to be available...
	I0831 22:07:01.796008   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.796380   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:01.796413   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.796552   21098 main.go:141] libmachine: (addons-132210) DBG | Using SSH client type: external
	I0831 22:07:01.796581   21098 main.go:141] libmachine: (addons-132210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa (-rw-------)
	I0831 22:07:01.796618   21098 main.go:141] libmachine: (addons-132210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:07:01.796631   21098 main.go:141] libmachine: (addons-132210) DBG | About to run SSH command:
	I0831 22:07:01.796665   21098 main.go:141] libmachine: (addons-132210) DBG | exit 0
	I0831 22:07:01.927398   21098 main.go:141] libmachine: (addons-132210) DBG | SSH cmd err, output: <nil>: 
	I0831 22:07:01.927709   21098 main.go:141] libmachine: (addons-132210) KVM machine creation complete!
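
The "will retry after ...: waiting for machine to come up" lines above come from libmachine polling the libvirt DHCP leases for the new domain until an address appears, sleeping a randomized, growing interval between attempts. A minimal Go sketch of that polling pattern follows; the lookupIP helper, the delay bounds, and the deadline are illustrative assumptions, not minikube's actual retry.go implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying the libvirt DHCP leases for the
    // domain's MAC address; it fails until a lease shows up.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 { // pretend the lease appears on the 5th poll
            return "", errors.New("no lease yet")
        }
        return "192.168.39.12", nil
    }

    // waitForIP polls lookupIP with a jittered, growing delay and gives up
    // after the deadline, mirroring the retry lines in the log above.
    func waitForIP(deadline time.Duration) (string, error) {
        start := time.Now()
        for attempt := 0; time.Since(start) < deadline; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            // base delay grows with the attempt count, plus random jitter
            delay := time.Duration(200+attempt*150)*time.Millisecond +
                time.Duration(rand.Intn(500))*time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        return "", fmt.Errorf("machine did not get an IP within %v", deadline)
    }

    func main() {
        ip, err := waitForIP(2 * time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("Found IP for machine:", ip)
    }
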
	I0831 22:07:01.928053   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:07:01.928588   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:01.928805   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:01.928982   21098 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:07:01.928996   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:01.930232   21098 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:07:01.930250   21098 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:07:01.930278   21098 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:07:01.930291   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:01.932160   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.932434   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:01.932466   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:01.932569   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:01.932748   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:01.932899   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:01.933022   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:01.933173   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:01.933359   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:01.933371   21098 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:07:02.030631   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:07:02.030654   21098 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:07:02.030661   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.033292   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.033728   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.033761   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.033978   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.034178   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.034350   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.034509   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.034664   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.034840   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.034854   21098 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:07:02.136244   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:07:02.136350   21098 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:07:02.136362   21098 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:07:02.136370   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.136633   21098 buildroot.go:166] provisioning hostname "addons-132210"
	I0831 22:07:02.136653   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.136838   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.139916   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.140414   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.140447   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.140679   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.140892   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.141063   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.141293   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.141484   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.141657   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.141672   21098 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-132210 && echo "addons-132210" | sudo tee /etc/hostname
	I0831 22:07:02.253631   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-132210
	
	I0831 22:07:02.253688   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.256261   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.256636   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.256662   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.256793   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.256965   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.257118   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.257266   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.257410   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.257558   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.257579   21098 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-132210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-132210/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-132210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:07:02.369069   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:07:02.369101   21098 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:07:02.369138   21098 buildroot.go:174] setting up certificates
	I0831 22:07:02.369148   21098 provision.go:84] configureAuth start
	I0831 22:07:02.369159   21098 main.go:141] libmachine: (addons-132210) Calling .GetMachineName
	I0831 22:07:02.369509   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:02.372462   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.372743   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.372769   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.372894   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.375363   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.375809   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.375831   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.376027   21098 provision.go:143] copyHostCerts
	I0831 22:07:02.376110   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:07:02.376256   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:07:02.376417   21098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:07:02.376622   21098 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.addons-132210 san=[127.0.0.1 192.168.39.12 addons-132210 localhost minikube]
	I0831 22:07:02.529409   21098 provision.go:177] copyRemoteCerts
	I0831 22:07:02.529465   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:07:02.529485   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.531858   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.532087   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.532145   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.532288   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.532439   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.532600   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.532744   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:02.614769   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:07:02.640733   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:07:02.666643   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:07:02.692178   21098 provision.go:87] duration metric: took 323.018181ms to configureAuth
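
The configureAuth step above generates a server certificate whose subject alternative names cover the VM's IP (192.168.39.12), its hostnames, and loopback, signs it with the local CA, and copies the result to /etc/docker on the guest. A minimal self-contained Go sketch of issuing a SAN-bearing certificate from a CA follows; the throwaway in-memory CA, key size, and validity period are illustrative assumptions, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA, standing in for ca.pem / ca-key.pem above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs seen in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "addons-132210"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"addons-132210", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.12")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Emit the server certificate in PEM form (the key would be written similarly).
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
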
	I0831 22:07:02.692206   21098 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:07:02.692406   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:02.692494   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.695406   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.695687   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.695718   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.695909   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.696178   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.696371   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.696472   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.696596   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:02.696771   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:02.696792   21098 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:07:02.919512   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:07:02.919537   21098 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:07:02.919546   21098 main.go:141] libmachine: (addons-132210) Calling .GetURL
	I0831 22:07:02.920835   21098 main.go:141] libmachine: (addons-132210) DBG | Using libvirt version 6000000
	I0831 22:07:02.923016   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.923361   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.923391   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.923525   21098 main.go:141] libmachine: Docker is up and running!
	I0831 22:07:02.923543   21098 main.go:141] libmachine: Reticulating splines...
	I0831 22:07:02.923552   21098 client.go:171] duration metric: took 25.29463901s to LocalClient.Create
	I0831 22:07:02.923574   21098 start.go:167] duration metric: took 25.294693611s to libmachine.API.Create "addons-132210"
	I0831 22:07:02.923584   21098 start.go:293] postStartSetup for "addons-132210" (driver="kvm2")
	I0831 22:07:02.923593   21098 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:07:02.923609   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:02.923852   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:07:02.923871   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:02.925703   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.926011   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:02.926030   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:02.926155   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:02.926317   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:02.926442   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:02.926556   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.006717   21098 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:07:03.011232   21098 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:07:03.011262   21098 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:07:03.011362   21098 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:07:03.011394   21098 start.go:296] duration metric: took 87.804145ms for postStartSetup
	I0831 22:07:03.011427   21098 main.go:141] libmachine: (addons-132210) Calling .GetConfigRaw
	I0831 22:07:03.012028   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:03.014629   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.014960   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.014988   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.015270   21098 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/config.json ...
	I0831 22:07:03.015499   21098 start.go:128] duration metric: took 25.404265309s to createHost
	I0831 22:07:03.015523   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.017928   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.018268   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.018291   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.018500   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.018686   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.018822   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.018966   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.019111   21098 main.go:141] libmachine: Using SSH client type: native
	I0831 22:07:03.019276   21098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0831 22:07:03.019286   21098 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:07:03.120128   21098 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725142023.097010301
	
	I0831 22:07:03.120147   21098 fix.go:216] guest clock: 1725142023.097010301
	I0831 22:07:03.120190   21098 fix.go:229] Guest: 2024-08-31 22:07:03.097010301 +0000 UTC Remote: 2024-08-31 22:07:03.015511488 +0000 UTC m=+25.502821103 (delta=81.498813ms)
	I0831 22:07:03.120212   21098 fix.go:200] guest clock delta is within tolerance: 81.498813ms
	I0831 22:07:03.120217   21098 start.go:83] releasing machines lock for "addons-132210", held for 25.509073174s
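
The guest-clock check above reads the VM's clock with "date +%s.%N", compares it to the host-side timestamp recorded for the same moment, and only corrects the guest clock when the delta exceeds a tolerance; here the 81.498813ms delta passes. A small Go sketch of that comparison using the values from the log follows; the 2-second tolerance is an assumption for illustration only.

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports the absolute guest/remote clock delta and whether
    // it is small enough to skip fixing, mirroring the fix.go decision above.
    func withinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Values taken from the log lines above.
        guest := time.Unix(1725142023, 97010301) // 2024-08-31 22:07:03.097010301 UTC
        remote := time.Date(2024, 8, 31, 22, 7, 3, 15511488, time.UTC)
        delta, ok := withinTolerance(guest, remote, 2*time.Second) // tolerance is illustrative
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }
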
	I0831 22:07:03.120236   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.120504   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:03.123087   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.123415   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.123439   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.123594   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124139   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124328   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:03.124419   21098 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:07:03.124455   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.124550   21098 ssh_runner.go:195] Run: cat /version.json
	I0831 22:07:03.124566   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:03.127123   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127348   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127456   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.127478   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127620   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.127797   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:03.127815   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:03.127860   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.127949   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:03.128037   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.128111   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:03.128172   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.128232   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:03.128351   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:03.200298   21098 ssh_runner.go:195] Run: systemctl --version
	I0831 22:07:03.227274   21098 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:07:03.385642   21098 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:07:03.391833   21098 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:07:03.391895   21098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:07:03.410079   21098 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:07:03.410103   21098 start.go:495] detecting cgroup driver to use...
	I0831 22:07:03.410164   21098 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:07:03.427440   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:07:03.442818   21098 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:07:03.442873   21098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:07:03.457961   21098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:07:03.472688   21098 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:07:03.587297   21098 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:07:03.750451   21098 docker.go:233] disabling docker service ...
	I0831 22:07:03.750529   21098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:07:03.765720   21098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:07:03.779301   21098 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:07:03.904389   21098 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:07:04.017402   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:07:04.032166   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:07:04.050757   21098 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:07:04.050832   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.061287   21098 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:07:04.061357   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.071771   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.082266   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.092904   21098 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:07:04.103797   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.114937   21098 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.132389   21098 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:07:04.142812   21098 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:07:04.152012   21098 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:07:04.152067   21098 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:07:04.165405   21098 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:07:04.174718   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:04.283822   21098 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:07:04.383793   21098 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:07:04.383893   21098 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:07:04.388685   21098 start.go:563] Will wait 60s for crictl version
	I0831 22:07:04.388753   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:07:04.392620   21098 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:07:04.444477   21098 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:07:04.444598   21098 ssh_runner.go:195] Run: crio --version
	I0831 22:07:04.473736   21098 ssh_runner.go:195] Run: crio --version
	I0831 22:07:04.503698   21098 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:07:04.505075   21098 main.go:141] libmachine: (addons-132210) Calling .GetIP
	I0831 22:07:04.507671   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:04.508005   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:04.508029   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:04.508213   21098 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:07:04.512325   21098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:07:04.525355   21098 kubeadm.go:883] updating cluster {Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:07:04.525461   21098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:07:04.525500   21098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:07:04.558664   21098 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0831 22:07:04.558743   21098 ssh_runner.go:195] Run: which lz4
	I0831 22:07:04.562947   21098 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 22:07:04.567112   21098 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 22:07:04.567139   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0831 22:07:05.903076   21098 crio.go:462] duration metric: took 1.340167325s to copy over tarball
	I0831 22:07:05.903140   21098 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 22:07:08.148415   21098 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.245250117s)
	I0831 22:07:08.148446   21098 crio.go:469] duration metric: took 2.245343942s to extract the tarball
	I0831 22:07:08.148455   21098 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 22:07:08.185382   21098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:07:08.228652   21098 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:07:08.228676   21098 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:07:08.228684   21098 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.31.0 crio true true} ...
	I0831 22:07:08.228785   21098 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-132210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:07:08.228868   21098 ssh_runner.go:195] Run: crio config
	I0831 22:07:08.272478   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:07:08.272508   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:07:08.272527   21098 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:07:08.272550   21098 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-132210 NodeName:addons-132210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:07:08.272727   21098 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-132210"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:07:08.272797   21098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:07:08.282654   21098 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:07:08.282722   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:07:08.292061   21098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:07:08.308679   21098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:07:08.324837   21098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0831 22:07:08.341642   21098 ssh_runner.go:195] Run: grep 192.168.39.12	control-plane.minikube.internal$ /etc/hosts
	I0831 22:07:08.345567   21098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:07:08.357961   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:08.466928   21098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:08.482753   21098 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210 for IP: 192.168.39.12
	I0831 22:07:08.482776   21098 certs.go:194] generating shared ca certs ...
	I0831 22:07:08.482790   21098 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.482937   21098 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:07:08.597311   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt ...
	I0831 22:07:08.597339   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt: {Name:mkfc4c408c230132bbe7fe213eeea10a6827c0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.597509   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key ...
	I0831 22:07:08.597520   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key: {Name:mkd43af6d176eb1599961c21c4cf9cd0b89179f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.597585   21098 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:07:08.724372   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt ...
	I0831 22:07:08.724403   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt: {Name:mk9535d600107772240a5a04a39fba46922be0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.724563   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key ...
	I0831 22:07:08.724574   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key: {Name:mkde040c84f81ae9d500962d5b2c7d3a71ca66c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.724640   21098 certs.go:256] generating profile certs ...
	I0831 22:07:08.724688   21098 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key
	I0831 22:07:08.724702   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt with IP's: []
	I0831 22:07:08.875287   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt ...
	I0831 22:07:08.875314   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: {Name:mk5db0031ee87d851d15425d75d7b2faf9a2a074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.875490   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key ...
	I0831 22:07:08.875501   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.key: {Name:mk19417e85915a2da4d854ab40b604380b362ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.875569   21098 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573
	I0831 22:07:08.875586   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12]
	I0831 22:07:08.931384   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 ...
	I0831 22:07:08.931413   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573: {Name:mk348633e181ba1f2f701144ddd9247b046d96ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.931554   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573 ...
	I0831 22:07:08.931567   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573: {Name:mk786aa380be6f62aca47aa829b55a6abecc88d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.931632   21098 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt.b6a6f573 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt
	I0831 22:07:08.931712   21098 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key.b6a6f573 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key
	I0831 22:07:08.931760   21098 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key
	I0831 22:07:08.931777   21098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt with IP's: []
	I0831 22:07:08.977840   21098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt ...
	I0831 22:07:08.977870   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt: {Name:mk26c70606574ad0633e48cf1995428b32594850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.978036   21098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key ...
	I0831 22:07:08.978047   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key: {Name:mk7a0020fb4b16382f09b75c285c938b4e52843a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:08.978220   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:07:08.978258   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:07:08.978282   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:07:08.978303   21098 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:07:08.978949   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:07:09.004455   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:07:09.029604   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:07:09.053313   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:07:09.077554   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:07:09.102196   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:07:09.127069   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:07:09.153769   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:07:09.180539   21098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:07:09.206167   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:07:09.224663   21098 ssh_runner.go:195] Run: openssl version
	I0831 22:07:09.230496   21098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:07:09.241375   21098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.246377   21098 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.246454   21098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:07:09.252587   21098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:07:09.263592   21098 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:07:09.267795   21098 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:07:09.267846   21098 kubeadm.go:392] StartCluster: {Name:addons-132210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-132210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:07:09.267917   21098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:07:09.267965   21098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:07:09.309105   21098 cri.go:89] found id: ""
	I0831 22:07:09.309176   21098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:07:09.319285   21098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:07:09.333293   21098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:07:09.348394   21098 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:07:09.348414   21098 kubeadm.go:157] found existing configuration files:
	
	I0831 22:07:09.348466   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:07:09.358972   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:07:09.359049   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:07:09.370609   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:07:09.382278   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:07:09.382347   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:07:09.393363   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:07:09.403425   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:07:09.403501   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:07:09.414483   21098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:07:09.425120   21098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:07:09.425188   21098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:07:09.436044   21098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 22:07:09.489573   21098 kubeadm.go:310] W0831 22:07:09.473217     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:07:09.490547   21098 kubeadm.go:310] W0831 22:07:09.474222     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:07:09.600273   21098 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:07:19.334217   21098 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:07:19.334291   21098 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:07:19.334389   21098 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:07:19.334542   21098 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:07:19.334652   21098 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:07:19.334708   21098 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:07:19.336431   21098 out.go:235]   - Generating certificates and keys ...
	I0831 22:07:19.336518   21098 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:07:19.336608   21098 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:07:19.336691   21098 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:07:19.336759   21098 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:07:19.336849   21098 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:07:19.336925   21098 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:07:19.337003   21098 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:07:19.337137   21098 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-132210 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0831 22:07:19.337224   21098 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:07:19.337376   21098 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-132210 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0831 22:07:19.337459   21098 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:07:19.337525   21098 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:07:19.337585   21098 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:07:19.337668   21098 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:07:19.337742   21098 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:07:19.337831   21098 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:07:19.337921   21098 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:07:19.338006   21098 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:07:19.338077   21098 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:07:19.338185   21098 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:07:19.338278   21098 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:07:19.340682   21098 out.go:235]   - Booting up control plane ...
	I0831 22:07:19.340798   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:07:19.340931   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:07:19.341031   21098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:07:19.341176   21098 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:07:19.341298   21098 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:07:19.341358   21098 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:07:19.341525   21098 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:07:19.341674   21098 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:07:19.341768   21098 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001861689s
	I0831 22:07:19.341842   21098 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:07:19.341928   21098 kubeadm.go:310] [api-check] The API server is healthy after 5.002243064s
	I0831 22:07:19.342094   21098 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:07:19.342281   21098 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:07:19.342371   21098 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:07:19.342560   21098 kubeadm.go:310] [mark-control-plane] Marking the node addons-132210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:07:19.342651   21098 kubeadm.go:310] [bootstrap-token] Using token: tds7o0.8p21t51ubuabfjmq
	I0831 22:07:19.344005   21098 out.go:235]   - Configuring RBAC rules ...
	I0831 22:07:19.344099   21098 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:07:19.344192   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:07:19.344360   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:07:19.344510   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:07:19.344781   21098 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:07:19.344861   21098 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:07:19.344973   21098 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:07:19.345017   21098 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:07:19.345057   21098 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:07:19.345063   21098 kubeadm.go:310] 
	I0831 22:07:19.345111   21098 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:07:19.345117   21098 kubeadm.go:310] 
	I0831 22:07:19.345211   21098 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:07:19.345219   21098 kubeadm.go:310] 
	I0831 22:07:19.345240   21098 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:07:19.345289   21098 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:07:19.345334   21098 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:07:19.345340   21098 kubeadm.go:310] 
	I0831 22:07:19.345393   21098 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:07:19.345401   21098 kubeadm.go:310] 
	I0831 22:07:19.345443   21098 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:07:19.345452   21098 kubeadm.go:310] 
	I0831 22:07:19.345503   21098 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:07:19.345607   21098 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:07:19.345685   21098 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:07:19.345695   21098 kubeadm.go:310] 
	I0831 22:07:19.345816   21098 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:07:19.345897   21098 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:07:19.345903   21098 kubeadm.go:310] 
	I0831 22:07:19.345969   21098 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tds7o0.8p21t51ubuabfjmq \
	I0831 22:07:19.346062   21098 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e \
	I0831 22:07:19.346084   21098 kubeadm.go:310] 	--control-plane 
	I0831 22:07:19.346090   21098 kubeadm.go:310] 
	I0831 22:07:19.346184   21098 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:07:19.346195   21098 kubeadm.go:310] 
	I0831 22:07:19.346266   21098 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tds7o0.8p21t51ubuabfjmq \
	I0831 22:07:19.346370   21098 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e 
	I0831 22:07:19.346389   21098 cni.go:84] Creating CNI manager for ""
	I0831 22:07:19.346398   21098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:07:19.347902   21098 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 22:07:19.348984   21098 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 22:07:19.359846   21098 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0831 22:07:19.378926   21098 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:07:19.378983   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:19.379028   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-132210 minikube.k8s.io/updated_at=2024_08_31T22_07_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-132210 minikube.k8s.io/primary=true
	I0831 22:07:19.505912   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:19.528337   21098 ops.go:34] apiserver oom_adj: -16
	I0831 22:07:20.006130   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:20.506049   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:21.006229   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:21.506568   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:22.006961   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:22.506496   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.006336   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.506858   21098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:23.585460   21098 kubeadm.go:1113] duration metric: took 4.206527831s to wait for elevateKubeSystemPrivileges
	I0831 22:07:23.585486   21098 kubeadm.go:394] duration metric: took 14.317645494s to StartCluster
	I0831 22:07:23.585502   21098 settings.go:142] acquiring lock: {Name:mkec6b4f5d3301688503002977bc4d63aab7adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:23.585612   21098 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:07:23.585914   21098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/kubeconfig: {Name:mkc6d6b60cc62b336d228fe4b49e098aa4d94f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:23.586102   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:07:23.586108   21098 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:07:23.586191   21098 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:07:23.586284   21098 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-132210"
	I0831 22:07:23.586294   21098 addons.go:69] Setting default-storageclass=true in profile "addons-132210"
	I0831 22:07:23.586299   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:23.586295   21098 addons.go:69] Setting cloud-spanner=true in profile "addons-132210"
	I0831 22:07:23.586317   21098 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-132210"
	I0831 22:07:23.586338   21098 addons.go:234] Setting addon cloud-spanner=true in "addons-132210"
	I0831 22:07:23.586334   21098 addons.go:69] Setting metrics-server=true in profile "addons-132210"
	I0831 22:07:23.586358   21098 addons.go:69] Setting inspektor-gadget=true in profile "addons-132210"
	I0831 22:07:23.586370   21098 addons.go:69] Setting helm-tiller=true in profile "addons-132210"
	I0831 22:07:23.586379   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586382   21098 addons.go:234] Setting addon inspektor-gadget=true in "addons-132210"
	I0831 22:07:23.586383   21098 addons.go:69] Setting storage-provisioner=true in profile "addons-132210"
	I0831 22:07:23.586392   21098 addons.go:234] Setting addon helm-tiller=true in "addons-132210"
	I0831 22:07:23.586403   21098 addons.go:234] Setting addon storage-provisioner=true in "addons-132210"
	I0831 22:07:23.586413   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586423   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586433   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586686   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586728   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586770   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586804   21098 addons.go:69] Setting registry=true in profile "addons-132210"
	I0831 22:07:23.586813   21098 addons.go:69] Setting volumesnapshots=true in profile "addons-132210"
	I0831 22:07:23.586825   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586832   21098 addons.go:234] Setting addon registry=true in "addons-132210"
	I0831 22:07:23.586844   21098 addons.go:234] Setting addon volumesnapshots=true in "addons-132210"
	I0831 22:07:23.586855   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586283   21098 addons.go:69] Setting yakd=true in profile "addons-132210"
	I0831 22:07:23.586867   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586889   21098 addons.go:234] Setting addon yakd=true in "addons-132210"
	I0831 22:07:23.586916   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.586807   21098 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-132210"
	I0831 22:07:23.586988   21098 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-132210"
	I0831 22:07:23.587205   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587226   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587228   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586371   21098 addons.go:234] Setting addon metrics-server=true in "addons-132210"
	I0831 22:07:23.587269   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586360   21098 addons.go:69] Setting gcp-auth=true in profile "addons-132210"
	I0831 22:07:23.587294   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587296   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.587300   21098 mustload.go:65] Loading cluster: addons-132210
	I0831 22:07:23.587308   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587341   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587377   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586345   21098 addons.go:69] Setting ingress-dns=true in profile "addons-132210"
	I0831 22:07:23.587497   21098 config.go:182] Loaded profile config "addons-132210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:07:23.586770   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587534   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587643   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587679   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587497   21098 addons.go:234] Setting addon ingress-dns=true in "addons-132210"
	I0831 22:07:23.586789   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.587724   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.587760   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.586794   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.586854   21098 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-132210"
	I0831 22:07:23.586783   21098 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-132210"
	I0831 22:07:23.587810   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.587828   21098 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-132210"
	I0831 22:07:23.586798   21098 addons.go:69] Setting volcano=true in profile "addons-132210"
	I0831 22:07:23.587854   21098 addons.go:234] Setting addon volcano=true in "addons-132210"
	I0831 22:07:23.586331   21098 addons.go:69] Setting ingress=true in profile "addons-132210"
	I0831 22:07:23.587887   21098 addons.go:234] Setting addon ingress=true in "addons-132210"
	I0831 22:07:23.588117   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.588477   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.588503   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.588555   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.588574   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.588797   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.588819   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.589146   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.589162   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.589185   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.589230   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.589278   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.595405   21098 out.go:177] * Verifying Kubernetes components...
	I0831 22:07:23.599775   21098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:23.607898   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0831 22:07:23.608464   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I0831 22:07:23.608573   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0831 22:07:23.609061   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609163   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609490   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.609665   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.609681   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.609938   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.609953   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.610031   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610054   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.610072   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.610147   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0831 22:07:23.610474   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610549   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0831 22:07:23.610740   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.610794   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.610831   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.611018   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.611156   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.611170   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.611286   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.611299   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.611477   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.611618   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.611699   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.615775   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.615947   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.615974   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.616335   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.616370   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.621980   21098 addons.go:234] Setting addon default-storageclass=true in "addons-132210"
	I0831 22:07:23.622070   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.622457   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.622516   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.623860   21098 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-132210"
	I0831 22:07:23.623897   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.624221   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.624251   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.631854   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.615777   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.632193   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.615777   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.632797   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.632822   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0831 22:07:23.639452   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I0831 22:07:23.639483   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0831 22:07:23.640021   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.640140   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.640612   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.640631   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.640965   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.641062   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.641077   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.641147   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.641480   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.642095   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.642132   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.644079   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0831 22:07:23.644378   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.644778   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.644853   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.644867   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.644876   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:23.645175   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.645259   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.645287   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.645335   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0831 22:07:23.645668   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.645683   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.645700   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.645993   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.646012   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.646152   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.646163   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.647040   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.647260   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.647648   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.647673   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.648054   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.649653   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.651862   21098 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:07:23.653359   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0831 22:07:23.653404   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:07:23.653419   21098 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:07:23.653443   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.653793   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.656591   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.657110   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.657148   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.657255   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.657289   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.657300   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.657746   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.657824   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.657895   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0831 22:07:23.657957   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.658358   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.658386   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.658390   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.658533   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.659277   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.659302   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.659683   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.659864   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.661487   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.663195   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:07:23.663288   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0831 22:07:23.663682   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.664270   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.664292   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.664416   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:07:23.664440   21098 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:07:23.664462   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.664598   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.665099   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.665137   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.668127   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.668154   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.668185   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.668378   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.668565   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.668732   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.668882   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.669430   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0831 22:07:23.669703   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.670101   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.670117   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.670405   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.671393   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.671430   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.672401   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0831 22:07:23.672405   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0831 22:07:23.672825   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.672904   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.673447   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.673475   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.673794   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.674020   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.674041   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.674092   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.674985   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.675528   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.675566   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.676624   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.678884   21098 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:07:23.680300   21098 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:23.680318   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:07:23.680341   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.681210   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0831 22:07:23.683715   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.683816   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0831 22:07:23.684416   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.684430   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.684488   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.684506   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.684593   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.684729   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.684885   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.684908   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.685078   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.685876   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.686073   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.686679   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0831 22:07:23.687155   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.687443   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.687626   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.687903   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.687917   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.688614   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.688628   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.688964   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.689489   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.689521   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.689640   21098 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:07:23.690115   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.690674   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.690712   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.690910   21098 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:23.690929   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:07:23.690949   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.693797   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.694203   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.694226   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.694378   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.694536   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.694652   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.694748   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.695907   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0831 22:07:23.696312   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.696776   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.696797   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.697094   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.697267   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.704189   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.704458   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0831 22:07:23.704894   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.705446   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.705465   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.705571   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0831 22:07:23.705976   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.706019   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:23.706276   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.706426   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.706438   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.706789   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.707335   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.707376   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.707662   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.708390   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0831 22:07:23.708421   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:23.708877   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.709389   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.709405   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.709467   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.709506   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44487
	I0831 22:07:23.709999   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.710056   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0831 22:07:23.710157   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0831 22:07:23.710455   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.710596   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.710831   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.710851   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.710876   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.710886   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:07:23.710934   21098 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:07:23.711123   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.711251   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.711467   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.711486   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.711519   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.712107   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.712202   21098 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:23.712222   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:07:23.712241   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.712501   21098 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:23.712517   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:07:23.712531   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.712669   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.712683   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.712710   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.712727   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37725
	I0831 22:07:23.712748   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.713405   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.713788   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.713855   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.714889   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.714908   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.715016   21098 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:07:23.715152   21098 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0831 22:07:23.715575   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.715816   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.716851   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.717255   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0831 22:07:23.717351   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.717594   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0831 22:07:23.717606   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0831 22:07:23.717622   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.718309   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.718412   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0831 22:07:23.718545   21098 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:07:23.718731   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.719156   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.719170   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.719236   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719258   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719522   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.719872   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.719904   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.719936   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.719954   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.720069   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.720084   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.720095   21098 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:07:23.720107   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:07:23.720130   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.720444   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.720568   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:23.720598   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:23.720724   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.720879   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.720934   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.720979   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.721048   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.721785   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.721873   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.722229   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.723401   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.723420   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.723449   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.723458   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0831 22:07:23.723466   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.723623   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.723671   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.723988   21098 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:23.723999   21098 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:07:23.724001   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.724033   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.724011   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.724695   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.724718   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.724889   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.725405   21098 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:07:23.725476   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.725493   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.725933   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.726224   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.726494   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:07:23.726505   21098 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:07:23.726517   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.727867   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.728730   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.728793   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.729260   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.729288   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730267   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:07:23.730375   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730404   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.730417   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.730471   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.730484   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.730629   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.730630   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.730777   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.730843   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.730978   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.731217   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.731708   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.731727   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.732701   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:07:23.733806   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0831 22:07:23.733914   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.734151   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.734236   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.734369   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.734573   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.734941   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.734955   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.735218   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:07:23.735423   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.735605   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.737637   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:07:23.737864   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.739670   21098 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:07:23.739673   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:07:23.740906   21098 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:23.740926   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:07:23.740944   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.742803   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:07:23.743050   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0831 22:07:23.743591   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.744134   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.744153   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.744225   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.744513   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.744683   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.744705   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.744736   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.744900   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.745356   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:07:23.745430   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.745580   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.745776   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.746229   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.746407   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:23.746416   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:23.746590   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:23.746598   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:23.746604   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:23.746609   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:23.746831   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:23.746844   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	W0831 22:07:23.746916   21098 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0831 22:07:23.748245   21098 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:07:23.749403   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:07:23.749426   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:07:23.749442   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.751103   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0831 22:07:23.751505   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.751960   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.751972   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.752271   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.752468   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.752488   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.752879   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.752892   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.753179   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.753384   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.753544   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.753666   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:23.753967   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	W0831 22:07:23.754404   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53982->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.754423   21098 retry.go:31] will retry after 201.037828ms: ssh: handshake failed: read tcp 192.168.39.1:53982->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.755597   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0831 22:07:23.755767   21098 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:07:23.755970   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:23.756401   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:23.756422   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:23.756792   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:23.756966   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:23.757169   21098 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:07:23.757183   21098 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:07:23.757195   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.758339   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:23.759819   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.760016   21098 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:07:23.760235   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.760273   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.760417   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.760619   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:23.760786   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.760948   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	W0831 22:07:23.761568   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53984->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.761587   21098 retry.go:31] will retry after 339.775685ms: ssh: handshake failed: read tcp 192.168.39.1:53984->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.762678   21098 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:07:23.764273   21098 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:23.764290   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:07:23.764302   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:23.767265   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.767714   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:23.767737   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:23.768009   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:23.768256   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	W0831 22:07:23.768259   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53988->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.768311   21098 retry.go:31] will retry after 253.843102ms: ssh: handshake failed: read tcp 192.168.39.1:53988->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.768409   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:23.768516   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	W0831 22:07:23.769143   21098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53996->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:23.769159   21098 retry.go:31] will retry after 228.687708ms: ssh: handshake failed: read tcp 192.168.39.1:53996->192.168.39.12:22: read: connection reset by peer
	I0831 22:07:24.009671   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0831 22:07:24.009698   21098 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0831 22:07:24.035122   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:07:24.035143   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:07:24.096675   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:24.137383   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:07:24.137405   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:07:24.192363   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:24.208220   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:07:24.208244   21098 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:07:24.213758   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:24.294093   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:24.337682   21098 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:24.337708   21098 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0831 22:07:24.355787   21098 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:07:24.355811   21098 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:07:24.397120   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:07:24.397152   21098 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:07:24.399259   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:07:24.399283   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:07:24.402180   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:24.414440   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:07:24.414467   21098 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:07:24.448723   21098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:24.448889   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:07:24.517279   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:24.544228   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:07:24.544262   21098 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:07:24.582484   21098 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:07:24.582507   21098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:07:24.590888   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:07:24.616331   21098 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:24.616362   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:07:24.621087   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:07:24.621125   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:07:24.734564   21098 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:24.734588   21098 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:07:24.758600   21098 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:07:24.758627   21098 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:07:24.761196   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:24.842914   21098 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:24.842933   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:07:24.864484   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:07:24.864510   21098 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:07:24.881251   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:07:24.881275   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:07:24.905038   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:24.972031   21098 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:07:24.972050   21098 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:07:25.015374   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:25.038602   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:25.055589   21098 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:25.055612   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:07:25.151602   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:07:25.151634   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:07:25.172190   21098 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:07:25.172212   21098 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:07:25.405884   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:25.444500   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:07:25.444532   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:07:25.463903   21098 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:07:25.463928   21098 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:07:25.694161   21098 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:07:25.694186   21098 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:07:25.820674   21098 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:07:25.820702   21098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:07:26.073362   21098 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:07:26.073394   21098 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:07:26.236676   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:07:26.236699   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:07:26.439580   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:07:26.439601   21098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:07:26.439960   21098 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:26.439985   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:07:26.584141   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:07:26.584183   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:07:26.783005   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:26.907600   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:07:26.907633   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:07:27.113741   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.01702554s)
	I0831 22:07:27.113757   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.92136815s)
	I0831 22:07:27.113790   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.113800   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.113830   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.113849   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114071   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114123   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114136   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.114145   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114194   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114229   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114252   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114268   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.114277   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.114475   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114488   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.114509   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114523   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:27.114580   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.114592   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.185606   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:27.185631   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:27.185967   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:27.185985   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:27.328527   21098 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:27.328551   21098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:07:27.420622   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:28.677844   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.383721569s)
	I0831 22:07:28.677898   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.677918   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678012   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464218982s)
	I0831 22:07:28.678051   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678062   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678125   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678139   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678148   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678155   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678124   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678363   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678382   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678392   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678399   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678411   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:28.678423   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:28.678427   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678445   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:28.678604   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:28.678634   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:28.678641   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:30.778509   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:07:30.778553   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:30.781708   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:30.782089   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:30.782125   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:30.782277   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:30.782513   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:30.782693   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:30.782862   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:31.160940   21098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:07:31.262365   21098 addons.go:234] Setting addon gcp-auth=true in "addons-132210"
	I0831 22:07:31.262423   21098 host.go:66] Checking if "addons-132210" exists ...
	I0831 22:07:31.262727   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:31.262758   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:31.277512   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0831 22:07:31.277939   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:31.278419   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:31.278439   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:31.278698   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:31.279297   21098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:07:31.279351   21098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:07:31.294328   21098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0831 22:07:31.294767   21098 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:07:31.295196   21098 main.go:141] libmachine: Using API Version  1
	I0831 22:07:31.295217   21098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:07:31.295567   21098 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:07:31.295765   21098 main.go:141] libmachine: (addons-132210) Calling .GetState
	I0831 22:07:31.297275   21098 main.go:141] libmachine: (addons-132210) Calling .DriverName
	I0831 22:07:31.297521   21098 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:07:31.297544   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHHostname
	I0831 22:07:31.300179   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:31.300578   21098 main.go:141] libmachine: (addons-132210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:57", ip: ""} in network mk-addons-132210: {Iface:virbr1 ExpiryTime:2024-08-31 23:06:53 +0000 UTC Type:0 Mac:52:54:00:35:a4:57 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-132210 Clientid:01:52:54:00:35:a4:57}
	I0831 22:07:31.300608   21098 main.go:141] libmachine: (addons-132210) DBG | domain addons-132210 has defined IP address 192.168.39.12 and MAC address 52:54:00:35:a4:57 in network mk-addons-132210
	I0831 22:07:31.300739   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHPort
	I0831 22:07:31.300921   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHKeyPath
	I0831 22:07:31.301090   21098 main.go:141] libmachine: (addons-132210) Calling .GetSSHUsername
	I0831 22:07:31.301236   21098 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/addons-132210/id_rsa Username:docker}
	I0831 22:07:32.605488   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.203266923s)
	I0831 22:07:32.605553   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.605587   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.605634   21098 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.156868741s)
	I0831 22:07:32.605738   21098 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.156819626s)
	I0831 22:07:32.605762   21098 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0831 22:07:32.606876   21098 node_ready.go:35] waiting up to 6m0s for node "addons-132210" to be "Ready" ...
	I0831 22:07:32.607056   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.089745734s)
	I0831 22:07:32.607084   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607095   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607118   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.016199589s)
	I0831 22:07:32.607152   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607164   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607211   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.845992141s)
	I0831 22:07:32.607230   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607245   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607248   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.702177169s)
	I0831 22:07:32.607264   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607279   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607359   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.591933506s)
	I0831 22:07:32.607385   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607396   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607840   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.569207893s)
	I0831 22:07:32.607890   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.607912   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.607980   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.607989   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608007   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608017   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608040   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.202125622s)
	W0831 22:07:32.608084   21098 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:32.608103   21098 retry.go:31] will retry after 213.169609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
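The failure recorded above is the usual CRD ordering race: the VolumeSnapshot CRDs and the csi-hostpath VolumeSnapshotClass are applied in one kubectl invocation, so the class cannot be mapped until the freshly created CRDs are registered ("ensure CRDs are installed first"). minikube simply retries, and at 22:07:32.822 below it re-applies the same manifests with --force. As a hedged, hand-run alternative that avoids the race by applying the CRDs first and waiting for them to become Established (file paths taken from the log; the 60s timeout is an arbitrary choice):

	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # block until the CRD is registered with the API server
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # now the kind "VolumeSnapshotClass" can be resolved
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml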
	I0831 22:07:32.608139   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608154   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608156   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608180   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608181   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608196   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608201   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608205   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608217   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608221   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.825175604s)
	I0831 22:07:32.608272   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608287   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608294   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608321   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608328   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608225   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608446   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608456   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608704   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.608733   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.608743   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.608759   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.608768   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.609174   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.609191   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.609201   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.609210   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.608880   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.610038   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.610082   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.610099   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.610108   21098 addons.go:475] Verifying addon ingress=true in "addons-132210"
	I0831 22:07:32.610320   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.610332   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.610339   21098 addons.go:475] Verifying addon registry=true in "addons-132210"
	I0831 22:07:32.611022   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611037   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611103   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611228   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611256   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611264   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611281   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.611290   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.611294   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611320   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611347   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611356   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.611364   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.611744   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611769   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.611785   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611796   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.611796   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.611805   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.612775   21098 out.go:177] * Verifying ingress addon...
	I0831 22:07:32.612947   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:32.612972   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.613355   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.613371   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.613380   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.612991   21098 out.go:177] * Verifying registry addon...
	I0831 22:07:32.613676   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.613692   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.613702   21098 addons.go:475] Verifying addon metrics-server=true in "addons-132210"
	I0831 22:07:32.613754   21098 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-132210 service yakd-dashboard -n yakd-dashboard
	
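The printed minikube command resolves the yakd-dashboard Service to a reachable URL once its Pod is Ready. A rough manual sequence, assuming the namespace shown above and waiting on every pod in it rather than a specific label:

	    kubectl --context addons-132210 -n yakd-dashboard wait pod --all \
	      --for=condition=Ready --timeout=5m
	    minikube -p addons-132210 service yakd-dashboard -n yakd-dashboard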
	I0831 22:07:32.615291   21098 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:07:32.616400   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:07:32.633226   21098 node_ready.go:49] node "addons-132210" has status "Ready":"True"
	I0831 22:07:32.633254   21098 node_ready.go:38] duration metric: took 26.354748ms for node "addons-132210" to be "Ready" ...
	I0831 22:07:32.633267   21098 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
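The "extra waiting" above polls each system-critical component by label until it reports Ready, within the overall 6m0s budget. A roughly equivalent manual check, using the label selectors listed in that log line, might be:

	    # labels copied from the log line above; context and namespace as in this run
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl --context addons-132210 -n kube-system wait pod -l "$sel" \
	        --for=condition=Ready --timeout=6m
	    done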
	I0831 22:07:32.672510   21098 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:07:32.672535   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.672811   21098 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:07:32.672833   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.716505   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:32.716533   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:32.716849   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:32.716869   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:32.722171   21098 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.790958   21098 pod_ready.go:93] pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.790982   21098 pod_ready.go:82] duration metric: took 68.780152ms for pod "coredns-6f6b679f8f-fg5wn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.790998   21098 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.822430   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:32.843686   21098 pod_ready.go:93] pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.843710   21098 pod_ready.go:82] duration metric: took 52.705196ms for pod "coredns-6f6b679f8f-lg2jj" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.843719   21098 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.894732   21098 pod_ready.go:93] pod "etcd-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.894755   21098 pod_ready.go:82] duration metric: took 51.029517ms for pod "etcd-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.894765   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.909271   21098 pod_ready.go:93] pod "kube-apiserver-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:32.909293   21098 pod_ready.go:82] duration metric: took 14.521596ms for pod "kube-apiserver-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:32.909302   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.013537   21098 pod_ready.go:93] pod "kube-controller-manager-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.013559   21098 pod_ready.go:82] duration metric: took 104.249609ms for pod "kube-controller-manager-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.013571   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pf4zb" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.127456   21098 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-132210" context rescaled to 1 replicas
	I0831 22:07:33.148736   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.257499   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.418853   21098 pod_ready.go:93] pod "kube-proxy-pf4zb" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.418877   21098 pod_ready.go:82] duration metric: took 405.299679ms for pod "kube-proxy-pf4zb" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.418890   21098 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.854578   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.855771   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.865760   21098 pod_ready.go:93] pod "kube-scheduler-addons-132210" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:33.865782   21098 pod_ready.go:82] duration metric: took 446.884331ms for pod "kube-scheduler-addons-132210" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:33.865796   21098 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:34.148775   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.148849   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.303845   21098 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.006297628s)
	I0831 22:07:34.303848   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.883150423s)
	I0831 22:07:34.304054   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.304074   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.304425   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.304447   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.304456   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.304467   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.304698   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.304719   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.304743   21098 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-132210"
	I0831 22:07:34.304787   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.305581   21098 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:34.306666   21098 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:07:34.308329   21098 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:07:34.309280   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:07:34.309726   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:34.309747   21098 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:07:34.329848   21098 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:07:34.329875   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.454442   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:34.454475   21098 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:07:34.518709   21098 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:34.518732   21098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:07:34.575530   21098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:34.579667   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.757184457s)
	I0831 22:07:34.579722   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.579737   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.580030   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.580053   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.580073   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.580089   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:34.580102   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:34.580283   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:34.580308   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:34.580311   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:34.619308   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.620410   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.814548   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.120705   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.121027   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.313455   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.628958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.629640   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.874670   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.924472   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:35.964663   21098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.389094024s)
	I0831 22:07:35.964728   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:35.964747   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:35.965086   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:35.965129   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:35.965146   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:35.965161   21098 main.go:141] libmachine: Making call to close driver server
	I0831 22:07:35.965177   21098 main.go:141] libmachine: (addons-132210) Calling .Close
	I0831 22:07:35.965478   21098 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:07:35.965495   21098 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:07:35.965500   21098 main.go:141] libmachine: (addons-132210) DBG | Closing plugin on server side
	I0831 22:07:35.967806   21098 addons.go:475] Verifying addon gcp-auth=true in "addons-132210"
	I0831 22:07:35.969545   21098 out.go:177] * Verifying gcp-auth addon...
	I0831 22:07:35.971896   21098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:07:35.999763   21098 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:35.999784   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:36.122605   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.123410   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.315123   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.475878   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:36.619752   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.620766   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.814203   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.975190   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:37.122336   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.122478   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.315341   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.475177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:37.620866   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.621439   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.814228   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.975613   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:38.120903   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.121229   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.314007   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.372392   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:38.475094   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:38.944270   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.944466   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.944638   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.977495   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:39.125969   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.126728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.313948   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.477476   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:39.620217   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:39.620445   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.814405   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.974903   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:40.121141   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.121755   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.314729   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.475251   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:40.620786   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:40.621250   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.814002   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.872198   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:41.005315   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:41.121910   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.122193   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.315886   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.476677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:41.621217   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:41.621565   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.823677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.977326   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:42.120209   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.120445   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.319015   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.476300   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:42.620896   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:42.621628   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.813805   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.872520   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:42.975650   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:43.119591   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.120374   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.316617   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.476126   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:43.619662   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.620425   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:43.815672   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.977099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:44.120689   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.120721   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.313640   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.474938   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:44.619883   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.620952   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:44.816734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.975512   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:45.119105   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.119826   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.313584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.380588   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:45.475926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:45.619771   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.620772   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:45.813745   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.975148   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.120296   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.120403   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.314008   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.475502   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.619407   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.619757   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:46.813669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.976377   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.121378   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.121861   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.320782   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.475797   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.620484   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.621120   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:47.817902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.873131   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:47.979915   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.120586   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.121010   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.314359   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.475174   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.620253   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:48.620967   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.813635   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.975699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.119734   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.120086   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.313782   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.475879   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.619985   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.621004   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:49.815468   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.873566   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:49.975581   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.120337   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.120541   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.314227   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.478135   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.622036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:50.622859   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.814060   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.975967   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.120306   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.121507   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.314547   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.475724   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.620114   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:51.620309   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.814022   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.976109   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.121801   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:52.122553   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.314307   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.372533   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:52.476431   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.619444   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.620536   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:52.814597   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.975521   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.120042   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.120210   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:53.314115   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.475728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.620177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:53.623813   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.814919   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.975959   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.120801   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.121168   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:54.315417   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.374460   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:54.476113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.619806   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.621022   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:54.815198   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.975080   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.120293   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.121322   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:55.314732   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.475687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.619856   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.620809   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:55.814765   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.975740   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.120854   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:56.121921   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.316560   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.475631   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.619589   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.620330   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:56.814597   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.872821   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:56.975866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.120787   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.120963   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:57.314895   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.476283   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.618831   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.620240   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:57.813768   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.975551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.121198   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:58.121479   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.314126   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.475209   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.620354   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.623406   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:58.817231   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.975135   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.120742   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.121902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:59.314224   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.372594   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:59.654374   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:59.654873   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.655101   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.814892   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.976412   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.121236   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:00.121952   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.314857   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.476585   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.620958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:00.621503   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.814717   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.975596   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.120556   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:01.121227   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.314332   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.373553   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:01.475855   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.620256   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:01.620695   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.817902   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.976941   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.120512   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:02.120709   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.315631   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.475468   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.621509   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:02.621785   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.814806   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.976174   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.120440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:03.120863   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.313700   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.475835   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.619665   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.621704   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:03.814121   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.872588   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:03.975298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.120824   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:04.121184   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.314338   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.475429   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.620540   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.620584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:04.815162   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.976895   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.120594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:05.120730   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.315865   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.476472   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.619151   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.619193   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:05.814469   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.873045   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:05.976083   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.120276   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.121632   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:06.316445   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.476113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.619879   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.621235   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:06.817665   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.977266   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.121891   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:07.125370   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.314681   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.475319   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.622891   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:07.623130   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.815134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.975338   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.120092   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.121833   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:08.314857   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.372618   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:08.475633   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.620926   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.622347   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.022099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.022480   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.120725   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.120911   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.314632   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.476068   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.620093   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.621293   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:09.814918   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.982257   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.120692   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.121929   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:10.314650   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.475440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.621191   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:10.621624   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.814610   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.871823   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:10.975582   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.120349   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:11.121548   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.314255   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.475551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.619270   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.619644   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:11.813295   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.976245   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.121122   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:12.121879   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.314903   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.475397   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.620793   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:12.621162   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.814057   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:12.872130   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:12.975754   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.133769   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:13.134318   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.314790   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.477695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.622634   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:13.624847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.821501   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:13.976538   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.119646   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.120341   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:14.315173   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.475306   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.621185   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.621510   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:14.814467   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:14.872822   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:14.976294   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.120441   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:15.121127   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.315400   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.475388   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.620578   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.620953   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:15.813943   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:15.979488   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.121495   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:16.121576   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.314944   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.475455   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.620506   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.620558   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:16.813569   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:16.872856   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:16.975991   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.120803   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:17.125876   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.314160   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.475916   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.620075   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:17.621270   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.815155   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:17.981149   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.120629   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.120785   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:18.315019   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.476099   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.620556   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.620934   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:18.814347   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:18.977438   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.120685   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:19.121338   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.315435   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.371445   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:19.475248   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.620321   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:19.620767   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.814394   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:19.975242   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.120360   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:20.120513   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.315529   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.484317   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.620297   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.620551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:20.814555   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:20.976127   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.120746   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.120965   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:21.315551   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.372806   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:21.476774   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.620656   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.621401   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:21.814726   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:21.975838   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:22.122780   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.126273   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:22.314614   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.476790   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:22.619929   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.622675   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:22.814144   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:22.975643   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:23.119721   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.120559   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:23.315087   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.474923   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:23.619836   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:23.621736   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:23.813687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:23.871468   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:23.976699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:24.120045   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.123398   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:24.602840   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:24.603194   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.619810   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:24.621697   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:24.814715   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:24.975695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:25.120948   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:25.121392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.318299   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.476633   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:25.619392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:25.620445   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:25.814377   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:25.872649   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:25.976267   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:26.122178   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:26.122596   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:26.314825   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.474926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:26.620117   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:26.620392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:26.815236   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:26.976263   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:27.122244   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:27.126825   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:27.314503   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:27.475451   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:27.619077   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:27.620128   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:27.814505   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:27.976659   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:28.119847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:28.119956   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:28.315111   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:28.373901   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:28.477178   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:28.621847   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:28.622419   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:28.814623   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:28.975971   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:29.120702   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:29.126856   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:08:29.333033   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:29.475641   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:29.620251   21098 kapi.go:107] duration metric: took 57.003845187s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:08:29.620894   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:29.813428   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:29.976100   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:30.120301   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:30.315054   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:30.475927   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:30.621321   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:30.816504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:30.873025   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:30.976290   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:31.120152   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:31.316147   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:31.476032   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:31.620260   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:31.816255   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:31.975740   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:32.122583   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:32.314298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:32.475815   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:32.620031   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:32.814337   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:32.873931   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:32.976076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:33.127234   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:33.313541   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:33.475361   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:33.619918   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:33.814036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:33.975222   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:34.119967   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:34.314700   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:34.476130   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:34.619753   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:34.815637   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:34.975904   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:35.119845   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:35.314907   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:35.372290   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:35.475061   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:35.620392   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:35.814214   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:35.975293   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:36.120499   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:36.315134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:36.476924   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:36.625728   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:36.815568   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:36.975977   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:37.119760   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:37.314098   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:37.475403   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:37.619353   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:37.814409   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:37.872370   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:38.414352   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:38.422314   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:38.422534   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:38.475478   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:38.620548   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:38.814646   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:38.978424   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:39.120310   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:39.315834   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:39.476326   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:39.619867   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:39.813168   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:39.875054   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:39.983870   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.119802   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:40.381691   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:40.480228   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.621421   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:40.815148   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:40.975440   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.119699   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:41.314866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:41.475833   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.619956   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:41.813677   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:41.975111   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.121321   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:42.314456   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:42.372543   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:42.475460   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.619163   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:42.814929   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:42.975788   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.120305   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:43.314076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:43.475628   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.620272   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:43.822113   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:43.976312   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.119884   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:44.319618   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:44.381557   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:44.477017   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.621506   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:44.826669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:44.976036   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.123433   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:45.313890   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:45.476804   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.619848   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:45.813116   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:45.976701   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.119113   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:46.313958   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:46.477472   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.620824   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:46.952945   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:46.956360   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:46.975185   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.120135   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:47.325549   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:47.476182   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.618992   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:47.815679   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:47.976615   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.119381   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:48.317018   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:48.476286   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.620330   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:48.814281   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:48.976023   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.119819   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:49.314898   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:49.372370   21098 pod_ready.go:103] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"False"
	I0831 22:08:49.475523   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.679647   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:49.815584   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:49.975653   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.119243   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:50.314821   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:50.493960   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.620412   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:50.814454   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:50.878784   21098 pod_ready.go:93] pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace has status "Ready":"True"
	I0831 22:08:50.878806   21098 pod_ready.go:82] duration metric: took 1m17.013002962s for pod "metrics-server-84c5f94fbc-4mp2p" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.878816   21098 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.884470   21098 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace has status "Ready":"True"
	I0831 22:08:50.884489   21098 pod_ready.go:82] duration metric: took 5.665136ms for pod "nvidia-device-plugin-daemonset-99v85" in "kube-system" namespace to be "Ready" ...
	I0831 22:08:50.884509   21098 pod_ready.go:39] duration metric: took 1m18.251226521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:08:50.884533   21098 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:08:50.884580   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:08:50.884638   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:08:50.955600   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:08:50.955626   21098 cri.go:89] found id: ""
	I0831 22:08:50.955635   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:08:50.955684   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:50.971435   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:08:50.971500   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:08:50.979153   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.029305   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:08:51.029329   21098 cri.go:89] found id: ""
	I0831 22:08:51.029338   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:08:51.029396   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.033768   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:08:51.033831   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:08:51.108642   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:08:51.108669   21098 cri.go:89] found id: ""
	I0831 22:08:51.108680   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:08:51.108740   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.114938   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:08:51.115012   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:08:51.121354   21098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:51.227554   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:08:51.227577   21098 cri.go:89] found id: ""
	I0831 22:08:51.227585   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:08:51.227629   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.242323   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:08:51.242407   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:08:51.306299   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:08:51.306319   21098 cri.go:89] found id: ""
	I0831 22:08:51.306327   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:08:51.306389   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.316849   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:51.317332   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:08:51.317392   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:08:51.404448   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:08:51.404466   21098 cri.go:89] found id: ""
	I0831 22:08:51.404472   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:08:51.404524   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:08:51.411682   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:08:51.411753   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:08:51.468597   21098 cri.go:89] found id: ""
	I0831 22:08:51.468623   21098 logs.go:276] 0 containers: []
	W0831 22:08:51.468631   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:08:51.468639   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:08:51.468651   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 22:08:51.482196   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0831 22:08:51.533263   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.533431   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:51.533563   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.533721   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:51.545028   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:08:51.545188   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:08:51.564495   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:08:51.564525   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:08:51.624037   21098 kapi.go:107] duration metric: took 1m19.008743885s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:08:51.815909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:51.850860   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:08:51.850908   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:08:51.976237   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.014670   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:08:52.014708   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:08:52.123496   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:08:52.123543   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:08:52.174958   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:08:52.175006   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:08:52.267648   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:08:52.267686   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:08:52.313784   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:52.334510   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:08:52.334536   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:08:52.388833   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:08:52.388872   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:08:52.458242   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:08:52.458270   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:08:52.475384   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.552472   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:08:52.552502   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:08:52.850283   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:52.937891   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:08:52.937926   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:08:52.937989   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:08:52.938003   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:52.938015   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:08:52.938039   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:08:52.938050   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:08:52.938058   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:08:52.938065   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:08:52.938073   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:08:52.978298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.315067   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:53.475986   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.817131   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.151054   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.314831   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.476234   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.816394   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:54.975421   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.315703   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:55.482514   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.815728   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:55.974892   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.314245   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:56.475975   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.814011   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:56.976504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.313628   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:57.475060   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.814335   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:57.976408   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.314175   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:58.475969   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.815045   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:58.975678   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.314157   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:59.475913   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.814537   21098 kapi.go:107] duration metric: took 1m25.505259155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:08:59.976603   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.476062   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.976224   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.477863   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.975298   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.476482   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.939628   21098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:09:02.961175   21098 api_server.go:72] duration metric: took 1m39.375038741s to wait for apiserver process to appear ...
	I0831 22:09:02.961200   21098 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:09:02.961237   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:09:02.961303   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:09:02.975877   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.999945   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:02.999964   21098 cri.go:89] found id: ""
	I0831 22:09:02.999971   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:09:03.000020   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.005045   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:09:03.005117   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:09:03.053454   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:03.053480   21098 cri.go:89] found id: ""
	I0831 22:09:03.053492   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:09:03.053548   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.057843   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:09:03.057918   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:09:03.102107   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:03.102134   21098 cri.go:89] found id: ""
	I0831 22:09:03.102144   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:09:03.102201   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.106758   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:09:03.106833   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:09:03.151303   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:03.151343   21098 cri.go:89] found id: ""
	I0831 22:09:03.151353   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:09:03.151431   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.155739   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:09:03.155817   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:09:03.212323   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:03.212348   21098 cri.go:89] found id: ""
	I0831 22:09:03.212357   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:09:03.212414   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.217064   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:09:03.217124   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:09:03.258208   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:03.258239   21098 cri.go:89] found id: ""
	I0831 22:09:03.258249   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:09:03.258311   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:03.262725   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:09:03.262794   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:09:03.304036   21098 cri.go:89] found id: ""
	I0831 22:09:03.304062   21098 logs.go:276] 0 containers: []
	W0831 22:09:03.304070   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:09:03.304077   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:09:03.304095   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:03.342633   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:09:03.342660   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:09:03.400297   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:09:03.400335   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:09:03.415806   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:09:03.415833   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:09:03.476498   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.538271   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:09:03.538303   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:03.602863   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:09:03.602897   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:03.663903   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:09:03.663936   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:03.737918   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:09:03.737948   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:03.788384   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:09:03.788419   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:09:03.838952   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.839121   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:03.839261   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.839450   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:03.850735   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:03.850895   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:03.871047   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:09:03.871072   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:03.931950   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:09:03.931983   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:09:03.975839   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.476679   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.492557   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:04.492594   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:09:04.492657   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:09:04.492672   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:04.492685   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:04.492696   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:04.492705   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:04.492716   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:04.492725   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:04.492737   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:09:04.975687   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.475569   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.975871   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.476108   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.975461   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.476261   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.976037   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.475699   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.975874   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.476000   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.975995   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.475521   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.195175   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.476002   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.975232   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.476158   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.975602   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.475134   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.976504   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.475926   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.493799   21098 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0831 22:09:14.501337   21098 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0831 22:09:14.502516   21098 api_server.go:141] control plane version: v1.31.0
	I0831 22:09:14.502536   21098 api_server.go:131] duration metric: took 11.541329499s to wait for apiserver health ...
	I0831 22:09:14.502547   21098 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:09:14.502568   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:09:14.502621   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:09:14.542688   21098 cri.go:89] found id: "d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:14.542712   21098 cri.go:89] found id: ""
	I0831 22:09:14.542721   21098 logs.go:276] 1 containers: [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887]
	I0831 22:09:14.542778   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.547207   21098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:09:14.547265   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:09:14.585253   21098 cri.go:89] found id: "9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:14.585277   21098 cri.go:89] found id: ""
	I0831 22:09:14.585285   21098 logs.go:276] 1 containers: [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9]
	I0831 22:09:14.585348   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.589951   21098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:09:14.590001   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:09:14.634151   21098 cri.go:89] found id: "8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:14.634171   21098 cri.go:89] found id: ""
	I0831 22:09:14.634178   21098 logs.go:276] 1 containers: [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523]
	I0831 22:09:14.634221   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.640116   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:09:14.640196   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:09:14.692606   21098 cri.go:89] found id: "ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:14.692629   21098 cri.go:89] found id: ""
	I0831 22:09:14.692636   21098 logs.go:276] 1 containers: [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da]
	I0831 22:09:14.692684   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.699229   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:09:14.699294   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:09:14.736751   21098 cri.go:89] found id: "dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:14.736777   21098 cri.go:89] found id: ""
	I0831 22:09:14.736785   21098 logs.go:276] 1 containers: [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c]
	I0831 22:09:14.736838   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.741521   21098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:09:14.741573   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:09:14.780419   21098 cri.go:89] found id: "88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:14.780448   21098 cri.go:89] found id: ""
	I0831 22:09:14.780456   21098 logs.go:276] 1 containers: [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e]
	I0831 22:09:14.780501   21098 ssh_runner.go:195] Run: which crictl
	I0831 22:09:14.785331   21098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:09:14.785397   21098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:09:14.832330   21098 cri.go:89] found id: ""
	I0831 22:09:14.832353   21098 logs.go:276] 0 containers: []
	W0831 22:09:14.832362   21098 logs.go:278] No container was found matching "kindnet"
	I0831 22:09:14.832371   21098 logs.go:123] Gathering logs for dmesg ...
	I0831 22:09:14.832385   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:09:14.849233   21098 logs.go:123] Gathering logs for kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] ...
	I0831 22:09:14.849266   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887"
	I0831 22:09:14.894187   21098 logs.go:123] Gathering logs for coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] ...
	I0831 22:09:14.894215   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523"
	I0831 22:09:14.932967   21098 logs.go:123] Gathering logs for container status ...
	I0831 22:09:14.933040   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:09:14.975669   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.995013   21098 logs.go:123] Gathering logs for kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] ...
	I0831 22:09:14.995045   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e"
	I0831 22:09:15.054114   21098 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:09:15.054155   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:09:15.476598   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.938089   21098 logs.go:123] Gathering logs for kubelet ...
	I0831 22:09:15.938136   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 22:09:15.975959   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0831 22:09:15.992400   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006106    1197 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:15.992568   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:15.992739   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:15.992917   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.005184   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.005355   21098 logs.go:138] Found kubelet problem: Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:16.027347   21098 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:09:16.027382   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:09:16.173595   21098 logs.go:123] Gathering logs for etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] ...
	I0831 22:09:16.173623   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9"
	I0831 22:09:16.260126   21098 logs.go:123] Gathering logs for kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] ...
	I0831 22:09:16.260162   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da"
	I0831 22:09:16.304110   21098 logs.go:123] Gathering logs for kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] ...
	I0831 22:09:16.304147   21098 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c"
	I0831 22:09:16.351377   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:16.351404   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:09:16.351460   21098 out.go:270] X Problems detected in kubelet:
	W0831 22:09:16.351474   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006162    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.351483   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: W0831 22:07:27.006214    1197 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-132210" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.351493   21098 out.go:270]   Aug 31 22:07:27 addons-132210 kubelet[1197]: E0831 22:07:27.006224    1197 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	W0831 22:09:16.351510   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: W0831 22:07:35.942516    1197 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-132210" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-132210' and this object
	W0831 22:09:16.351521   21098 out.go:270]   Aug 31 22:07:35 addons-132210 kubelet[1197]: E0831 22:07:35.942576    1197 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-132210\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-132210' and this object" logger="UnhandledError"
	I0831 22:09:16.351531   21098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:09:16.351541   21098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:09:16.477457   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.975815   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.475770   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.979376   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.475592   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.976801   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.476121   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.977073   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.475240   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.976681   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.475484   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.976058   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.475479   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.975925   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.475911   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.976177   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.475909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.975151   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.476109   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.975695   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.362028   21098 system_pods.go:59] 18 kube-system pods found
	I0831 22:09:26.362061   21098 system_pods.go:61] "coredns-6f6b679f8f-fg5wn" [44101eb2-e5ab-4205-8770-fcd8e3e7c877] Running
	I0831 22:09:26.362066   21098 system_pods.go:61] "csi-hostpath-attacher-0" [d5e59cee-4aef-4a71-8e87-a17016deb8aa] Running
	I0831 22:09:26.362070   21098 system_pods.go:61] "csi-hostpath-resizer-0" [1472dd5a-623f-4e1b-bb88-aa9737965d61] Running
	I0831 22:09:26.362073   21098 system_pods.go:61] "csi-hostpathplugin-f9r7t" [c332f2e3-d867-4e1b-b27f-62b8ff234fb8] Running
	I0831 22:09:26.362077   21098 system_pods.go:61] "etcd-addons-132210" [78c4bd71-140b-49f9-8bc1-4b4e1f3e77e1] Running
	I0831 22:09:26.362080   21098 system_pods.go:61] "kube-apiserver-addons-132210" [266d225a-02ab-4449-bc78-88940e2e01be] Running
	I0831 22:09:26.362083   21098 system_pods.go:61] "kube-controller-manager-addons-132210" [efd3eb72-530e-4d83-9f80-ed4252c65edb] Running
	I0831 22:09:26.362086   21098 system_pods.go:61] "kube-ingress-dns-minikube" [0e0b7880-36a9-4588-b4f2-69ee4d28f341] Running
	I0831 22:09:26.362089   21098 system_pods.go:61] "kube-proxy-pf4zb" [d398a8b8-eef4-41b1-945b-bf73a594737e] Running
	I0831 22:09:26.362092   21098 system_pods.go:61] "kube-scheduler-addons-132210" [40d172ae-efff-4b60-b47f-86e58c381de7] Running
	I0831 22:09:26.362095   21098 system_pods.go:61] "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
	I0831 22:09:26.362099   21098 system_pods.go:61] "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
	I0831 22:09:26.362102   21098 system_pods.go:61] "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
	I0831 22:09:26.362105   21098 system_pods.go:61] "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
	I0831 22:09:26.362108   21098 system_pods.go:61] "snapshot-controller-56fcc65765-d8zmh" [842cfb93-bc24-4a0f-8191-8cff822e4981] Running
	I0831 22:09:26.362111   21098 system_pods.go:61] "snapshot-controller-56fcc65765-vz7w2" [879946b9-6f92-4ad5-8e18-84154122b30a] Running
	I0831 22:09:26.362115   21098 system_pods.go:61] "storage-provisioner" [7444df94-b591-414e-bb8f-6eecc8fb06c5] Running
	I0831 22:09:26.362119   21098 system_pods.go:61] "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
	I0831 22:09:26.362128   21098 system_pods.go:74] duration metric: took 11.859574121s to wait for pod list to return data ...
	I0831 22:09:26.362140   21098 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:09:26.364694   21098 default_sa.go:45] found service account: "default"
	I0831 22:09:26.364718   21098 default_sa.go:55] duration metric: took 2.572024ms for default service account to be created ...
	I0831 22:09:26.364726   21098 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:09:26.371946   21098 system_pods.go:86] 18 kube-system pods found
	I0831 22:09:26.371979   21098 system_pods.go:89] "coredns-6f6b679f8f-fg5wn" [44101eb2-e5ab-4205-8770-fcd8e3e7c877] Running
	I0831 22:09:26.371985   21098 system_pods.go:89] "csi-hostpath-attacher-0" [d5e59cee-4aef-4a71-8e87-a17016deb8aa] Running
	I0831 22:09:26.371989   21098 system_pods.go:89] "csi-hostpath-resizer-0" [1472dd5a-623f-4e1b-bb88-aa9737965d61] Running
	I0831 22:09:26.371993   21098 system_pods.go:89] "csi-hostpathplugin-f9r7t" [c332f2e3-d867-4e1b-b27f-62b8ff234fb8] Running
	I0831 22:09:26.371997   21098 system_pods.go:89] "etcd-addons-132210" [78c4bd71-140b-49f9-8bc1-4b4e1f3e77e1] Running
	I0831 22:09:26.372000   21098 system_pods.go:89] "kube-apiserver-addons-132210" [266d225a-02ab-4449-bc78-88940e2e01be] Running
	I0831 22:09:26.372003   21098 system_pods.go:89] "kube-controller-manager-addons-132210" [efd3eb72-530e-4d83-9f80-ed4252c65edb] Running
	I0831 22:09:26.372007   21098 system_pods.go:89] "kube-ingress-dns-minikube" [0e0b7880-36a9-4588-b4f2-69ee4d28f341] Running
	I0831 22:09:26.372011   21098 system_pods.go:89] "kube-proxy-pf4zb" [d398a8b8-eef4-41b1-945b-bf73a594737e] Running
	I0831 22:09:26.372014   21098 system_pods.go:89] "kube-scheduler-addons-132210" [40d172ae-efff-4b60-b47f-86e58c381de7] Running
	I0831 22:09:26.372017   21098 system_pods.go:89] "metrics-server-84c5f94fbc-4mp2p" [9f5c8bca-8c7c-4216-b875-066e9a9fb36a] Running
	I0831 22:09:26.372020   21098 system_pods.go:89] "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
	I0831 22:09:26.372023   21098 system_pods.go:89] "registry-6fb4cdfc84-gxktn" [1fb4c0a2-6bf0-41ab-8539-9d0bdb976d78] Running
	I0831 22:09:26.372046   21098 system_pods.go:89] "registry-proxy-n7rzz" [49867dc1-8d92-48f0-8c8b-50a65936ad12] Running
	I0831 22:09:26.372053   21098 system_pods.go:89] "snapshot-controller-56fcc65765-d8zmh" [842cfb93-bc24-4a0f-8191-8cff822e4981] Running
	I0831 22:09:26.372057   21098 system_pods.go:89] "snapshot-controller-56fcc65765-vz7w2" [879946b9-6f92-4ad5-8e18-84154122b30a] Running
	I0831 22:09:26.372060   21098 system_pods.go:89] "storage-provisioner" [7444df94-b591-414e-bb8f-6eecc8fb06c5] Running
	I0831 22:09:26.372063   21098 system_pods.go:89] "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
	I0831 22:09:26.372068   21098 system_pods.go:126] duration metric: took 7.338208ms to wait for k8s-apps to be running ...
	I0831 22:09:26.372077   21098 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:09:26.372143   21098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:09:26.387943   21098 system_svc.go:56] duration metric: took 15.858116ms WaitForService to wait for kubelet
	I0831 22:09:26.387974   21098 kubeadm.go:582] duration metric: took 2m2.801840351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:09:26.387995   21098 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:09:26.390995   21098 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:09:26.391021   21098 node_conditions.go:123] node cpu capacity is 2
	I0831 22:09:26.391033   21098 node_conditions.go:105] duration metric: took 3.032634ms to run NodePressure ...
	I0831 22:09:26.391043   21098 start.go:241] waiting for startup goroutines ...
	I0831 22:09:26.475914   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.975777   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.476954   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.975206   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.476090   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.975734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.475698   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.976296   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.476559   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.975576   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.477596   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.975909   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.475130   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.975291   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.476041   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.975866   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.475356   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.976258   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.475594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.975538   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.475516   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.975882   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.475912   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.980397   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.476464   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.976629   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.476682   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.977594   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.476050   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.975586   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.476076   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.988997   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.475034   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.976591   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.476154   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.975736   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.476250   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.976670   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.476952   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.975160   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.475606   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.976118   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.476033   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.975996   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.475583   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.976184   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.475823   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:49.975703   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.476541   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:50.976407   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.476083   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:51.976078   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:52.475636   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:52.977028   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:53.475427   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:53.976231   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:54.475762   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:54.975423   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:55.480634   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:55.976191   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:56.475501   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:56.976688   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:57.477084   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:57.975727   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:58.476734   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:58.975704   21098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:59.475793   21098 kapi.go:107] duration metric: took 2m23.503891799s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:09:59.477292   21098 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-132210 cluster.
	I0831 22:09:59.478644   21098 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:09:59.479814   21098 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:09:59.481180   21098 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, nvidia-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0831 22:09:59.482381   21098 addons.go:510] duration metric: took 2m35.8961992s for enable addons: enabled=[cloud-spanner default-storageclass nvidia-device-plugin storage-provisioner ingress-dns inspektor-gadget helm-tiller metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0831 22:09:59.482411   21098 start.go:246] waiting for cluster config update ...
	I0831 22:09:59.482427   21098 start.go:255] writing updated cluster config ...
	I0831 22:09:59.482654   21098 ssh_runner.go:195] Run: rm -f paused
	I0831 22:09:59.531140   21098 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:09:59.533137   21098 out.go:177] * Done! kubectl is now configured to use "addons-132210" cluster and "default" namespace by default
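	(Editor's note: the gcp-auth messages above say a pod can opt out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. The following is a minimal illustrative sketch only, not output from this test run; the pod name, image, and the label value "true" are assumptions — the log above only specifies the label key.)
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds        # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"    # key taken from the addon output above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: busybox                  # placeholder image
	        command: ["sleep", "3600"]
	
	Under that assumption, creating such a pod in the addons-132210 cluster (e.g. `kubectl --context addons-132210 apply -f pod.yaml`) would skip the GCP credential mount, while pods without the label are mutated by the gcp-auth webhook as described above.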
	
	
	==> CRI-O <==
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.855710390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142984855683770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8048b7f7-2171-4003-be5e-a9bd39f4c8c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.856285580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=577c68b0-6ac8-4c45-b689-e4cba6d19dd6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.856376792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=577c68b0-6ac8-4c45-b689-e4cba6d19dd6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.856655156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f40292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image
:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7
d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=577c68b0-6ac8-4c45-b689-e4cba6d19dd6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.891752925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5f37936-682a-4a7a-943a-d5170b32eb95 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.891823441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5f37936-682a-4a7a-943a-d5170b32eb95 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.892877045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f49a185-389a-4e92-a19c-ef388a8c3c8b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.894117002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142984894092025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f49a185-389a-4e92-a19c-ef388a8c3c8b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.894733573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5e1e7f6-9ced-4b17-ae73-d93e7e2589a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.894805673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5e1e7f6-9ced-4b17-ae73-d93e7e2589a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.895119431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f40292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image
:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7
d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5e1e7f6-9ced-4b17-ae73-d93e7e2589a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.940311079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad0db28f-da17-4fa7-aa2e-a4a662e13502 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.940449293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad0db28f-da17-4fa7-aa2e-a4a662e13502 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.941718209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a72cec53-27e2-4498-ace7-3a550a61d997 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.942691050Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb.CQNST2\"" file="server/server.go:805"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.942729959Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb.CQNST2\"" file="server/server.go:805"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.942770556Z" level=debug msg="Container or sandbox exited: 7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb.CQNST2" file="server/server.go:810"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.942826143Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb\"" file="server/server.go:805"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.943057453Z" level=debug msg="Container or sandbox exited: 7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb" file="server/server.go:810"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.943086863Z" level=debug msg="container exited and found: 7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb" file="server/server.go:825"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.943306405Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb.CQNST2\"" file="server/server.go:805"
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.944184392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142984944156495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a72cec53-27e2-4498-ace7-3a550a61d997 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.944879890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec4cdab6-4c28-4dae-8c49-038b82a3212a name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.944990799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec4cdab6-4c28-4dae-8c49-038b82a3212a name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:23:04 addons-132210 crio[663]: time="2024-08-31 22:23:04.945300207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7458fbd94e22ffa91cad2483d94657266f045e1cd14f703942f5fdd4dfcd5346,PodSandboxId:efbe2df8b713ce5f1978dacc3d8bc60dc8e8abed9ce5c7a1a3de86e89fd988c8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725142899501781859,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bh4sk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58f13e00-b249-4877-a309-dba5324d1975,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f11da1cb74f181b238eb28dbf9d14c991a5b24d9355d2661753e69c7566cd5,PodSandboxId:88099b0ca1ae0809c0730e0a5318fa453aa4b2f35b98d96565b3807d3328aed1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725142757309219116,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c9d33c8-b37d-4376-9ade-e9dcf4168c22,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcfee929f65d2d36211b1786446804660d67bbf43d508d1fba566e685fc6c0,PodSandboxId:dc2ee3e74ad9422ccac6783b988e3f5a956b7942b6418b8d9f20bd191346de55,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1725142753157959726,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-zb4l7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: ebe68c93-bd00-4fed-bf1c-dbf120b29acd,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0,PodSandboxId:a65bfb6d507f4b97758fcdf6c5bb014de49629343b5875b2ef0fe6b17159536a,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725142198205706615,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-6n2z6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: eac88b74-6230-4d8c-8317-9845d7cfdf8b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb,PodSandboxId:c04f5bd8263541b5cff476ff0ae185fb33292e2233ced82ae0ab73d6944a4936,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1725142060941479356,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4mp2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c8bca-8c7c-4216-b875-066e9a9fb36a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e,PodSandboxId:e7805858822ce862cdff2848a2f398056193d1af518c28f6de5c51a5df932198,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725142052138237865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7444df94-b591-414e-bb8f-6eecc8fb06c5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523,PodSandboxId:c9d76344783a2ddd77613ce5e2cf5bebacde1e392340bc2dd90ad6bc6584b641,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725142047629762697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fg5wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44101eb2-e5ab-4205-8770-fcd8e3e7c877,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c,PodSandboxId:cd53e58a6020b64efa873aa088e03d2314785006507be53bc645124248e4da93,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725142045006003102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf4zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d398a8b8-eef4-41b1-945b-bf73a594737e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e,PodSandboxId:1cce6cbc6a4faab96a418d403d12827e1afd496b8b40c6dd34aa37d9a9864fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725142033697029807,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f34a4b3a35bc052fdbc4eb18cc9c5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887,PodSandboxId:e2253778a2445365015d46ff9b6f47deab19c3a758b07f40292d937170fc4469,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725142033694433987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a0129139dae5ed440c87eb580bdbc49,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9,PodSandboxId:54cbd2b4b9e2e479d7b725cc9b9b5468ed6b4a901cc2a54a7471cafe91d20c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image
:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725142033681287217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f45e4b932d5a25119726105258f3e1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da,PodSandboxId:3f1a88db7a62d6e58893547e5822f7431056b7d0318d3b559f5a295a851c3d8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7
d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725142033466549261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-132210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9ccfab0f761103f3306ea3afe127ef,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec4cdab6-4c28-4dae-8c49-038b82a3212a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7458fbd94e22f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   efbe2df8b713c       hello-world-app-55bf9c44b4-bh4sk
	23f11da1cb74f       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         3 minutes ago        Running             nginx                     0                   88099b0ca1ae0       nginx
	e8dcfee929f65       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   3 minutes ago        Running             headlamp                  0                   dc2ee3e74ad94       headlamp-57fb76fcdb-zb4l7
	a5e788d23e628       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   a65bfb6d507f4       gcp-auth-89d5ffd79-6n2z6
	7ef4a6c40dbe3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago       Running             metrics-server            0                   c04f5bd826354       metrics-server-84c5f94fbc-4mp2p
	0b70bc07a6fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago       Running             storage-provisioner       0                   e7805858822ce       storage-provisioner
	8bb7c1b21e074       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        15 minutes ago       Running             coredns                   0                   c9d76344783a2       coredns-6f6b679f8f-fg5wn
	dc9d1779c9ec0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        15 minutes ago       Running             kube-proxy                0                   cd53e58a6020b       kube-proxy-pf4zb
	88f24112cdf2e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        15 minutes ago       Running             kube-controller-manager   0                   1cce6cbc6a4fa       kube-controller-manager-addons-132210
	d5a6630200902       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        15 minutes ago       Running             kube-apiserver            0                   e2253778a2445       kube-apiserver-addons-132210
	9e07eecb0bd41       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   54cbd2b4b9e2e       etcd-addons-132210
	ea40b4dfb934e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        15 minutes ago       Running             kube-scheduler            0                   3f1a88db7a62d       kube-scheduler-addons-132210
	
	
	==> coredns [8bb7c1b21e074f65d779e7aa011ddb6ca7fafeb2ac24f7576ff0d428146b1523] <==
	[INFO] 10.244.0.8:59871 - 44836 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110421s
	[INFO] 10.244.0.8:33356 - 42014 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135624s
	[INFO] 10.244.0.8:33356 - 8221 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242056s
	[INFO] 10.244.0.8:35585 - 13377 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041231s
	[INFO] 10.244.0.8:35585 - 3142 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000183049s
	[INFO] 10.244.0.8:47934 - 56724 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038372s
	[INFO] 10.244.0.8:47934 - 6297 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105405s
	[INFO] 10.244.0.8:48416 - 43339 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095854s
	[INFO] 10.244.0.8:48416 - 20808 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089325s
	[INFO] 10.244.0.8:60809 - 24507 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090972s
	[INFO] 10.244.0.8:60809 - 27316 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000241444s
	[INFO] 10.244.0.8:39141 - 61060 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013393s
	[INFO] 10.244.0.8:39141 - 6786 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000294732s
	[INFO] 10.244.0.8:47336 - 11940 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039145s
	[INFO] 10.244.0.8:47336 - 21158 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101081s
	[INFO] 10.244.0.8:36849 - 58078 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195322s
	[INFO] 10.244.0.8:36849 - 19164 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290198s
	[INFO] 10.244.0.22:57715 - 978 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363634s
	[INFO] 10.244.0.22:36290 - 10290 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000102337s
	[INFO] 10.244.0.22:59607 - 56162 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068575s
	[INFO] 10.244.0.22:57832 - 20486 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115987s
	[INFO] 10.244.0.22:47101 - 58158 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072188s
	[INFO] 10.244.0.22:54115 - 35881 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059499s
	[INFO] 10.244.0.22:38928 - 44111 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003739828s
	[INFO] 10.244.0.22:51045 - 42584 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003766695s
	
	
	==> describe nodes <==
	Name:               addons-132210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-132210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-132210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_07_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-132210
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:07:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-132210
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:22:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:21:57 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:21:57 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:21:57 +0000   Sat, 31 Aug 2024 22:07:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:21:57 +0000   Sat, 31 Aug 2024 22:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    addons-132210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 12c3f930f06943eb9eedcbe740b437c1
	  System UUID:                12c3f930-f069-43eb-9eed-cbe740b437c1
	  Boot ID:                    0c2dfdc3-b8db-4280-8b08-729176a830ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-bh4sk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  gcp-auth                    gcp-auth-89d5ffd79-6n2z6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  headlamp                    headlamp-57fb76fcdb-zb4l7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-6f6b679f8f-fg5wn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-132210                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-132210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-132210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-pf4zb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-132210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-132210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-132210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-132210 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-132210 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-132210 event: Registered Node addons-132210 in Controller
	
	
	==> dmesg <==
	[Aug31 22:08] kauditd_printk_skb: 41 callbacks suppressed
	[ +10.213253] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.886474] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.896138] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.604296] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.756625] kauditd_printk_skb: 12 callbacks suppressed
	[Aug31 22:09] kauditd_printk_skb: 12 callbacks suppressed
	[ +32.975043] kauditd_printk_skb: 32 callbacks suppressed
	[ +15.460927] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.545206] kauditd_printk_skb: 2 callbacks suppressed
	[Aug31 22:10] kauditd_printk_skb: 9 callbacks suppressed
	[Aug31 22:11] kauditd_printk_skb: 28 callbacks suppressed
	[Aug31 22:14] kauditd_printk_skb: 28 callbacks suppressed
	[Aug31 22:18] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.322338] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.045430] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.602574] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.435262] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.828071] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.293926] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.470597] kauditd_printk_skb: 6 callbacks suppressed
	[Aug31 22:19] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.179886] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.551519] kauditd_printk_skb: 41 callbacks suppressed
	[Aug31 22:21] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9e07eecb0bd416c26a0e0971923f332cf73c99b3f524fc0835e63e96d5f35fb9] <==
	{"level":"info","ts":"2024-08-31T22:08:46.932596Z","caller":"traceutil/trace.go:171","msg":"trace[1095917297] linearizableReadLoop","detail":"{readStateIndex:1115; appliedIndex:1114; }","duration":"131.800969ms","start":"2024-08-31T22:08:46.800782Z","end":"2024-08-31T22:08:46.932583Z","steps":["trace[1095917297] 'read index received'  (duration: 131.639117ms)","trace[1095917297] 'applied index is now lower than readState.Index'  (duration: 161.4µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:08:46.932831Z","caller":"traceutil/trace.go:171","msg":"trace[219276231] transaction","detail":"{read_only:false; response_revision:1084; number_of_response:1; }","duration":"225.630773ms","start":"2024-08-31T22:08:46.707192Z","end":"2024-08-31T22:08:46.932823Z","steps":["trace[219276231] 'process raft request'  (duration: 225.308004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.268644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933105Z","caller":"traceutil/trace.go:171","msg":"trace[850287395] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1084; }","duration":"132.320492ms","start":"2024-08-31T22:08:46.800778Z","end":"2024-08-31T22:08:46.933098Z","steps":["trace[850287395] 'agreement among raft nodes before linearized reading'  (duration: 132.252602ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.180196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933247Z","caller":"traceutil/trace.go:171","msg":"trace[660106792] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1084; }","duration":"123.218846ms","start":"2024-08-31T22:08:46.810023Z","end":"2024-08-31T22:08:46.933242Z","steps":["trace[660106792] 'agreement among raft nodes before linearized reading'  (duration: 123.16896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:08:46.933583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.542858ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:46.933623Z","caller":"traceutil/trace.go:171","msg":"trace[330872549] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1084; }","duration":"108.584553ms","start":"2024-08-31T22:08:46.825032Z","end":"2024-08-31T22:08:46.933616Z","steps":["trace[330872549] 'agreement among raft nodes before linearized reading'  (duration: 108.535322ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:49.655357Z","caller":"traceutil/trace.go:171","msg":"trace[165319690] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"136.350729ms","start":"2024-08-31T22:08:49.518991Z","end":"2024-08-31T22:08:49.655342Z","steps":["trace[165319690] 'process raft request'  (duration: 136.128055ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:49.661370Z","caller":"traceutil/trace.go:171","msg":"trace[1593117983] transaction","detail":"{read_only:false; response_revision:1101; number_of_response:1; }","duration":"135.493651ms","start":"2024-08-31T22:08:49.525861Z","end":"2024-08-31T22:08:49.661354Z","steps":["trace[1593117983] 'process raft request'  (duration: 134.988688ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:08:54.136074Z","caller":"traceutil/trace.go:171","msg":"trace[1104677073] linearizableReadLoop","detail":"{readStateIndex:1165; appliedIndex:1164; }","duration":"172.969109ms","start":"2024-08-31T22:08:53.963035Z","end":"2024-08-31T22:08:54.136004Z","steps":["trace[1104677073] 'read index received'  (duration: 170.41125ms)","trace[1104677073] 'applied index is now lower than readState.Index'  (duration: 2.557067ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-31T22:08:54.136319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.226891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:08:54.136413Z","caller":"traceutil/trace.go:171","msg":"trace[851686441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"173.346413ms","start":"2024-08-31T22:08:53.963007Z","end":"2024-08-31T22:08:54.136353Z","steps":["trace[851686441] 'agreement among raft nodes before linearized reading'  (duration: 173.201927ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:09:11.180801Z","caller":"traceutil/trace.go:171","msg":"trace[143927082] linearizableReadLoop","detail":"{readStateIndex:1232; appliedIndex:1231; }","duration":"217.79961ms","start":"2024-08-31T22:09:10.962974Z","end":"2024-08-31T22:09:11.180774Z","steps":["trace[143927082] 'read index received'  (duration: 217.657091ms)","trace[143927082] 'applied index is now lower than readState.Index'  (duration: 142.006µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:09:11.180954Z","caller":"traceutil/trace.go:171","msg":"trace[41968220] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"247.07813ms","start":"2024-08-31T22:09:10.933868Z","end":"2024-08-31T22:09:11.180946Z","steps":["trace[41968220] 'process raft request'  (duration: 246.800851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:09:11.181156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.482027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"warn","ts":"2024-08-31T22:09:11.181231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.247568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:09:11.181305Z","caller":"traceutil/trace.go:171","msg":"trace[497721371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1196; }","duration":"218.327497ms","start":"2024-08-31T22:09:10.962970Z","end":"2024-08-31T22:09:11.181277Z","steps":["trace[497721371] 'agreement among raft nodes before linearized reading'  (duration: 218.228122ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:09:11.181240Z","caller":"traceutil/trace.go:171","msg":"trace[450022890] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1196; }","duration":"132.57275ms","start":"2024-08-31T22:09:11.048648Z","end":"2024-08-31T22:09:11.181221Z","steps":["trace[450022890] 'agreement among raft nodes before linearized reading'  (duration: 132.417556ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:17:14.568202Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1526}
	{"level":"info","ts":"2024-08-31T22:17:14.607762Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1526,"took":"38.547549ms","hash":33265301,"current-db-size-bytes":6266880,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3313664,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-08-31T22:17:14.607883Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":33265301,"revision":1526,"compact-revision":-1}
	{"level":"info","ts":"2024-08-31T22:22:14.575132Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1946}
	{"level":"info","ts":"2024-08-31T22:22:14.595823Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1946,"took":"19.810227ms","hash":4216937896,"current-db-size-bytes":6393856,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4976640,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-31T22:22:14.595952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4216937896,"revision":1946,"compact-revision":1526}
	
	
	==> gcp-auth [a5e788d23e62874ea50192efd8131ed3aab2b28a4bb06ccad1066036599d8da0] <==
	2024/08/31 22:09:59 Ready to write response ...
	2024/08/31 22:18:02 Ready to marshal response ...
	2024/08/31 22:18:02 Ready to write response ...
	2024/08/31 22:18:02 Ready to marshal response ...
	2024/08/31 22:18:02 Ready to write response ...
	2024/08/31 22:18:13 Ready to marshal response ...
	2024/08/31 22:18:13 Ready to write response ...
	2024/08/31 22:18:14 Ready to marshal response ...
	2024/08/31 22:18:14 Ready to write response ...
	2024/08/31 22:18:18 Ready to marshal response ...
	2024/08/31 22:18:18 Ready to write response ...
	2024/08/31 22:18:38 Ready to marshal response ...
	2024/08/31 22:18:38 Ready to write response ...
	2024/08/31 22:18:59 Ready to marshal response ...
	2024/08/31 22:18:59 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:06 Ready to marshal response ...
	2024/08/31 22:19:06 Ready to write response ...
	2024/08/31 22:19:10 Ready to marshal response ...
	2024/08/31 22:19:10 Ready to write response ...
	2024/08/31 22:21:36 Ready to marshal response ...
	2024/08/31 22:21:36 Ready to write response ...
	
	
	==> kernel <==
	 22:23:05 up 16 min,  0 users,  load average: 0.16, 0.38, 0.41
	Linux addons-132210 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5a663020090207c887152c0a251e565d398aeb4e4eefefef5c766993d0ac887] <==
	E0831 22:08:55.517599       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0831 22:08:55.517717       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.101.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.101.143:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I0831 22:08:55.540025       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0831 22:18:30.350419       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0831 22:18:31.825010       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:18:54.356239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.356735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.443718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.443780       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.469865       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.470384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:18:54.501684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:18:54.501737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:18:55.472783       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:18:55.502100       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0831 22:18:55.519265       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0831 22:19:04.668043       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0831 22:19:05.793178       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:19:06.516419       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.208.123"}
	I0831 22:19:10.568572       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:19:10.763197       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.55.157"}
	I0831 22:21:36.578157       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.3.72"}
	
	
	==> kube-controller-manager [88f24112cdf2eace919ddda67875d06a0889e4a617df37cc920ca6d22fc2b22e] <==
	I0831 22:21:36.427538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.114721ms"
	I0831 22:21:36.441530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.906978ms"
	I0831 22:21:36.441603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.2µs"
	I0831 22:21:38.498964       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0831 22:21:38.503343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.917µs"
	I0831 22:21:38.507293       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0831 22:21:39.905599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.138637ms"
	I0831 22:21:39.907724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="92.839µs"
	I0831 22:21:48.562653       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0831 22:21:48.714015       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:21:48.714073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:21:57.875286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-132210"
	W0831 22:22:06.919646       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:22:06.919715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:22:11.362386       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:22:11.362638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:22:31.210845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:22:31.210974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:22:38.875594       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:22:38.875651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:22:50.980980       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:22:50.981017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:22:59.001679       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:22:59.001789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:23:03.846073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="13.038µs"
	
	
	==> kube-proxy [dc9d1779c9ec008b142aace86f836e1cf2ba761641d43d7111ef356716d9148c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:07:25.903033       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:07:25.911310       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0831 22:07:25.911403       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:07:25.982344       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:07:25.982403       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:07:25.982435       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:07:25.985880       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:07:25.986197       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:07:25.986208       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:07:25.987987       1 config.go:197] "Starting service config controller"
	I0831 22:07:25.988004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:07:25.988023       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:07:25.988027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:07:25.988362       1 config.go:326] "Starting node config controller"
	I0831 22:07:25.988369       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:07:26.089133       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:07:26.089163       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:07:26.089183       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ea40b4dfb934e0828bc6d8305cd4319fdf994d138f4a793ce6772c35520118da] <==
	E0831 22:07:16.254732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0831 22:07:16.241012       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:07:17.051102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:07:17.051135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.097676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:07:17.097729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.116710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:07:17.116759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.238680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:07:17.238731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.308444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:07:17.308680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.361218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:07:17.361749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.445778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:07:17.445880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.451014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:07:17.451126       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:07:17.464610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:07:17.464787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.482630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:07:17.482757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:07:17.545180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:07:17.545318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:07:19.433315       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:22:19 addons-132210 kubelet[1197]: I0831 22:22:19.268064    1197 scope.go:117] "RemoveContainer" containerID="03905d71943c4e651e76ae1ff5dcce37d478d42828a721077cce0afb0b52765d"
	Aug 31 22:22:19 addons-132210 kubelet[1197]: I0831 22:22:19.292191    1197 scope.go:117] "RemoveContainer" containerID="833daa1d9c053b650bff72b5cb767f37b4713ecb695275d52527dfe370109c18"
	Aug 31 22:22:24 addons-132210 kubelet[1197]: E0831 22:22:24.671303    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4d4c7d4f-e101-4a1a-8b8f-6d8a0cd8de3f"
	Aug 31 22:22:29 addons-132210 kubelet[1197]: E0831 22:22:29.033970    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142949033370378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:29 addons-132210 kubelet[1197]: E0831 22:22:29.034262    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142949033370378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:36 addons-132210 kubelet[1197]: E0831 22:22:36.672133    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4d4c7d4f-e101-4a1a-8b8f-6d8a0cd8de3f"
	Aug 31 22:22:39 addons-132210 kubelet[1197]: E0831 22:22:39.037052    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142959036498144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:39 addons-132210 kubelet[1197]: E0831 22:22:39.037077    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142959036498144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:48 addons-132210 kubelet[1197]: E0831 22:22:48.673297    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4d4c7d4f-e101-4a1a-8b8f-6d8a0cd8de3f"
	Aug 31 22:22:49 addons-132210 kubelet[1197]: E0831 22:22:49.040080    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142969039588134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:49 addons-132210 kubelet[1197]: E0831 22:22:49.040107    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142969039588134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:59 addons-132210 kubelet[1197]: E0831 22:22:59.042332    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142979041977030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:22:59 addons-132210 kubelet[1197]: E0831 22:22:59.042594    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725142979041977030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579779,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:23:01 addons-132210 kubelet[1197]: E0831 22:23:01.671442    1197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4d4c7d4f-e101-4a1a-8b8f-6d8a0cd8de3f"
	Aug 31 22:23:03 addons-132210 kubelet[1197]: I0831 22:23:03.871557    1197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-bh4sk" podStartSLOduration=85.371421357 podStartE2EDuration="1m27.871530886s" podCreationTimestamp="2024-08-31 22:21:36 +0000 UTC" firstStartedPulling="2024-08-31 22:21:36.977386119 +0000 UTC m=+858.473347622" lastFinishedPulling="2024-08-31 22:21:39.477495641 +0000 UTC m=+860.973457151" observedRunningTime="2024-08-31 22:21:39.894967091 +0000 UTC m=+861.390928614" watchObservedRunningTime="2024-08-31 22:23:03.871530886 +0000 UTC m=+945.367492409"
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.251035    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtj4\" (UniqueName: \"kubernetes.io/projected/9f5c8bca-8c7c-4216-b875-066e9a9fb36a-kube-api-access-fhtj4\") pod \"9f5c8bca-8c7c-4216-b875-066e9a9fb36a\" (UID: \"9f5c8bca-8c7c-4216-b875-066e9a9fb36a\") "
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.251077    1197 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f5c8bca-8c7c-4216-b875-066e9a9fb36a-tmp-dir\") pod \"9f5c8bca-8c7c-4216-b875-066e9a9fb36a\" (UID: \"9f5c8bca-8c7c-4216-b875-066e9a9fb36a\") "
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.252259    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f5c8bca-8c7c-4216-b875-066e9a9fb36a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9f5c8bca-8c7c-4216-b875-066e9a9fb36a" (UID: "9f5c8bca-8c7c-4216-b875-066e9a9fb36a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.261537    1197 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5c8bca-8c7c-4216-b875-066e9a9fb36a-kube-api-access-fhtj4" (OuterVolumeSpecName: "kube-api-access-fhtj4") pod "9f5c8bca-8c7c-4216-b875-066e9a9fb36a" (UID: "9f5c8bca-8c7c-4216-b875-066e9a9fb36a"). InnerVolumeSpecName "kube-api-access-fhtj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.266045    1197 scope.go:117] "RemoveContainer" containerID="7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb"
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.309492    1197 scope.go:117] "RemoveContainer" containerID="7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb"
	Aug 31 22:23:05 addons-132210 kubelet[1197]: E0831 22:23:05.310074    1197 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb\": container with ID starting with 7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb not found: ID does not exist" containerID="7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb"
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.310126    1197 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb"} err="failed to get container status \"7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb\": rpc error: code = NotFound desc = could not find container \"7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb\": container with ID starting with 7ef4a6c40dbe3a8d531cf986dcea14a7bbc6ae630c5b4166e666407a78dc57eb not found: ID does not exist"
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.352155    1197 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fhtj4\" (UniqueName: \"kubernetes.io/projected/9f5c8bca-8c7c-4216-b875-066e9a9fb36a-kube-api-access-fhtj4\") on node \"addons-132210\" DevicePath \"\""
	Aug 31 22:23:05 addons-132210 kubelet[1197]: I0831 22:23:05.352200    1197 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f5c8bca-8c7c-4216-b875-066e9a9fb36a-tmp-dir\") on node \"addons-132210\" DevicePath \"\""
	
	
	==> storage-provisioner [0b70bc07a6feca32dfee0e626a7ed1a81667de088741b28865f43564c8fec31e] <==
	I0831 22:07:33.356182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:07:33.426579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:07:33.426654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:07:33.847351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:07:33.848726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5!
	I0831 22:07:33.850075       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8a1f8e-16e7-4a54-81fb-1116caaffa55", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5 became leader
	I0831 22:07:33.951304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-132210_611ba034-ea36-4e1e-9c7a-33dfa80263a5!
	

                                                
                                                
-- /stdout --
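Note on the CoreDNS entries captured above: the repeated NXDOMAIN/NOERROR pairs are ordinary search-path expansion for an in-cluster lookup. With ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (producing the NXDOMAIN answers) before the absolute name returns NOERROR, so DNS itself was resolving inside the cluster at the time of this capture. The resolver config below is an illustrative sketch of what is typically injected into a pod in this cluster; the nameserver address is an assumption and is not taken from the captured logs, while the search suffixes mirror the ones visible in the queries above.

	# /etc/resolv.conf inside a pod (illustrative sketch; nameserver value assumed)
	nameserver 10.96.0.10
	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	options ndots:5
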
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-132210 -n addons-132210
helpers_test.go:262: (dbg) Run:  kubectl --context addons-132210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-132210 describe pod busybox
helpers_test.go:283: (dbg) kubectl --context addons-132210 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-132210/192.168.39.12
	Start Time:       Sat, 31 Aug 2024 22:09:59 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzs9l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wzs9l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/busybox to addons-132210
	  Normal   Pulling    11m (x4 over 13m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)    kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m1s (x44 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:286: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (286.15s)
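Note on the busybox ImagePullBackOff shown in the describe output above: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password: unauthorized", an authentication error rather than a connectivity one, and the pod carries placeholder project credentials injected by the gcp-auth addon (PROJECT_ID=this_is_fake plus a mounted /google-app-creds.json), which is consistent with a rejected token on the gcr.io pull. The commands below are an illustrative diagnostic sketch, not part of the test harness: pulling the image directly on the node bypasses the injected pod credentials and helps separate an auth problem from a network problem. They assume the addons-132210 VM is still running.

	# Illustrative diagnosis only (assumes the profile VM is still up):
	out/minikube-linux-amd64 -p addons-132210 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	kubectl --context addons-132210 get events -n default --field-selector involvedObject.name=busybox
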

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 node stop m02 -v=7 --alsologtostderr
E0831 22:32:23.523586   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:33:04.485865   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.459808911s)

                                                
                                                
-- stdout --
	* Stopping node "ha-957517-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:32:20.706682   36445 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:32:20.706827   36445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:20.706839   36445 out.go:358] Setting ErrFile to fd 2...
	I0831 22:32:20.706844   36445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:20.707526   36445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:32:20.708253   36445 mustload.go:65] Loading cluster: ha-957517
	I0831 22:32:20.708687   36445 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:32:20.708703   36445 stop.go:39] StopHost: ha-957517-m02
	I0831 22:32:20.709086   36445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:32:20.709140   36445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:32:20.724564   36445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0831 22:32:20.725070   36445 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:32:20.725593   36445 main.go:141] libmachine: Using API Version  1
	I0831 22:32:20.725612   36445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:32:20.725891   36445 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:32:20.728040   36445 out.go:177] * Stopping node "ha-957517-m02"  ...
	I0831 22:32:20.729158   36445 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0831 22:32:20.729177   36445 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:32:20.729371   36445 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0831 22:32:20.729389   36445 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:32:20.732267   36445 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:32:20.732643   36445 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:32:20.732673   36445 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:32:20.732809   36445 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:32:20.732970   36445 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:32:20.733122   36445 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:32:20.733244   36445 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:32:20.823953   36445 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0831 22:32:20.878722   36445 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0831 22:32:20.934243   36445 main.go:141] libmachine: Stopping "ha-957517-m02"...
	I0831 22:32:20.934284   36445 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:32:20.935890   36445 main.go:141] libmachine: (ha-957517-m02) Calling .Stop
	I0831 22:32:20.938936   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 0/120
	I0831 22:32:21.940168   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 1/120
	I0831 22:32:22.941726   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 2/120
	I0831 22:32:23.943004   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 3/120
	I0831 22:32:24.944310   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 4/120
	I0831 22:32:25.946135   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 5/120
	I0831 22:32:26.947557   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 6/120
	I0831 22:32:27.949624   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 7/120
	I0831 22:32:28.951008   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 8/120
	I0831 22:32:29.952248   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 9/120
	I0831 22:32:30.954289   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 10/120
	I0831 22:32:31.956798   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 11/120
	I0831 22:32:32.958420   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 12/120
	I0831 22:32:33.959716   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 13/120
	I0831 22:32:34.961640   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 14/120
	I0831 22:32:35.963353   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 15/120
	I0831 22:32:36.964639   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 16/120
	I0831 22:32:37.965831   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 17/120
	I0831 22:32:38.967242   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 18/120
	I0831 22:32:39.968479   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 19/120
	I0831 22:32:40.970729   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 20/120
	I0831 22:32:41.971997   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 21/120
	I0831 22:32:42.973693   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 22/120
	I0831 22:32:43.974959   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 23/120
	I0831 22:32:44.976099   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 24/120
	I0831 22:32:45.977912   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 25/120
	I0831 22:32:46.979100   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 26/120
	I0831 22:32:47.980356   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 27/120
	I0831 22:32:48.981637   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 28/120
	I0831 22:32:49.983703   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 29/120
	I0831 22:32:50.985734   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 30/120
	I0831 22:32:51.987070   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 31/120
	I0831 22:32:52.988491   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 32/120
	I0831 22:32:53.989926   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 33/120
	I0831 22:32:54.991377   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 34/120
	I0831 22:32:55.992662   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 35/120
	I0831 22:32:56.994160   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 36/120
	I0831 22:32:57.995585   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 37/120
	I0831 22:32:58.997971   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 38/120
	I0831 22:32:59.999354   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 39/120
	I0831 22:33:01.001475   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 40/120
	I0831 22:33:02.002757   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 41/120
	I0831 22:33:03.004824   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 42/120
	I0831 22:33:04.006036   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 43/120
	I0831 22:33:05.007213   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 44/120
	I0831 22:33:06.009007   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 45/120
	I0831 22:33:07.010335   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 46/120
	I0831 22:33:08.011627   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 47/120
	I0831 22:33:09.013996   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 48/120
	I0831 22:33:10.015643   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 49/120
	I0831 22:33:11.017535   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 50/120
	I0831 22:33:12.019613   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 51/120
	I0831 22:33:13.021905   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 52/120
	I0831 22:33:14.023425   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 53/120
	I0831 22:33:15.024828   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 54/120
	I0831 22:33:16.026889   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 55/120
	I0831 22:33:17.028155   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 56/120
	I0831 22:33:18.029802   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 57/120
	I0831 22:33:19.032045   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 58/120
	I0831 22:33:20.033288   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 59/120
	I0831 22:33:21.035159   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 60/120
	I0831 22:33:22.036322   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 61/120
	I0831 22:33:23.037718   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 62/120
	I0831 22:33:24.038947   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 63/120
	I0831 22:33:25.040263   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 64/120
	I0831 22:33:26.042005   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 65/120
	I0831 22:33:27.043159   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 66/120
	I0831 22:33:28.044371   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 67/120
	I0831 22:33:29.045789   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 68/120
	I0831 22:33:30.047571   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 69/120
	I0831 22:33:31.049707   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 70/120
	I0831 22:33:32.050929   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 71/120
	I0831 22:33:33.052405   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 72/120
	I0831 22:33:34.053611   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 73/120
	I0831 22:33:35.054852   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 74/120
	I0831 22:33:36.056635   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 75/120
	I0831 22:33:37.057903   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 76/120
	I0831 22:33:38.059763   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 77/120
	I0831 22:33:39.061630   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 78/120
	I0831 22:33:40.063028   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 79/120
	I0831 22:33:41.065051   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 80/120
	I0831 22:33:42.066374   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 81/120
	I0831 22:33:43.067711   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 82/120
	I0831 22:33:44.069601   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 83/120
	I0831 22:33:45.070846   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 84/120
	I0831 22:33:46.072730   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 85/120
	I0831 22:33:47.074360   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 86/120
	I0831 22:33:48.075558   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 87/120
	I0831 22:33:49.076823   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 88/120
	I0831 22:33:50.078154   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 89/120
	I0831 22:33:51.080089   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 90/120
	I0831 22:33:52.081664   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 91/120
	I0831 22:33:53.082909   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 92/120
	I0831 22:33:54.084530   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 93/120
	I0831 22:33:55.086060   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 94/120
	I0831 22:33:56.087924   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 95/120
	I0831 22:33:57.089675   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 96/120
	I0831 22:33:58.091033   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 97/120
	I0831 22:33:59.092612   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 98/120
	I0831 22:34:00.094040   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 99/120
	I0831 22:34:01.096241   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 100/120
	I0831 22:34:02.098108   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 101/120
	I0831 22:34:03.099228   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 102/120
	I0831 22:34:04.100657   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 103/120
	I0831 22:34:05.101865   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 104/120
	I0831 22:34:06.103365   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 105/120
	I0831 22:34:07.104583   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 106/120
	I0831 22:34:08.106200   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 107/120
	I0831 22:34:09.107644   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 108/120
	I0831 22:34:10.109248   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 109/120
	I0831 22:34:11.111360   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 110/120
	I0831 22:34:12.113275   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 111/120
	I0831 22:34:13.115148   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 112/120
	I0831 22:34:14.116557   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 113/120
	I0831 22:34:15.117970   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 114/120
	I0831 22:34:16.119901   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 115/120
	I0831 22:34:17.122066   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 116/120
	I0831 22:34:18.123442   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 117/120
	I0831 22:34:19.124642   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 118/120
	I0831 22:34:20.125951   36445 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 119/120
	I0831 22:34:21.126502   36445 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0831 22:34:21.126629   36445 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-957517 node stop m02 -v=7 --alsologtostderr": exit status 30
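For context on the failure above: the trace shows the kvm2 driver issuing .Stop and then polling .GetState roughly once per second, 120 times ("Waiting for machine to stop 0/120" through "119/120"), before giving up with stop err: unable to stop vm, current state "Running" and exit status 30. Below is a minimal Go sketch of that kind of bounded stop-and-poll loop; getState and stopVM are hypothetical stand-ins for the driver calls seen in the log, and the sketch illustrates the pattern only, not minikube's actual libmachine code.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState and stopVM stand in for the driver RPCs seen in the trace
	// (Calling .GetState / Calling .Stop); they are hard-coded here so the
	// sketch always exhausts its attempts, as the failing node did.
	func getState() string { return "Running" }
	func stopVM() error    { return nil }

	// stopWithTimeout requests a stop, then polls the VM state once per
	// second for at most `attempts` tries, mirroring the
	// "Waiting for machine to stop N/120" lines in the log.
	func stopWithTimeout(attempts int) error {
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			if getState() != "Running" {
				return nil // the VM reached a stopped state in time
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopWithTimeout(120); err != nil {
			fmt.Println("stop err:", err) // the caller then reports exit status 30
		}
	}
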
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
E0831 22:34:26.407180   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 3 (19.212348265s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:34:21.167553   36872 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:34:21.167794   36872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:21.167804   36872 out.go:358] Setting ErrFile to fd 2...
	I0831 22:34:21.167808   36872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:21.167961   36872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:34:21.168118   36872 out.go:352] Setting JSON to false
	I0831 22:34:21.168140   36872 mustload.go:65] Loading cluster: ha-957517
	I0831 22:34:21.168195   36872 notify.go:220] Checking for updates...
	I0831 22:34:21.168643   36872 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:34:21.168664   36872 status.go:255] checking status of ha-957517 ...
	I0831 22:34:21.169208   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:21.169314   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:21.188808   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0831 22:34:21.189239   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:21.189884   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:21.189919   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:21.190216   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:21.190385   36872 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:34:21.191827   36872 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:34:21.191839   36872 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:21.192131   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:21.192165   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:21.206527   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0831 22:34:21.206914   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:21.207389   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:21.207409   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:21.207699   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:21.207852   36872 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:34:21.210628   36872 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:21.211071   36872 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:21.211098   36872 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:21.211237   36872 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:21.211550   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:21.211589   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:21.226471   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I0831 22:34:21.226855   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:21.227352   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:21.227377   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:21.227679   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:21.227847   36872 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:34:21.228022   36872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:21.228055   36872 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:34:21.230690   36872 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:21.231144   36872 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:21.231161   36872 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:21.231364   36872 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:34:21.231493   36872 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:34:21.231610   36872 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:34:21.231720   36872 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:34:21.316946   36872 ssh_runner.go:195] Run: systemctl --version
	I0831 22:34:21.325290   36872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:21.356723   36872 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:21.356758   36872 api_server.go:166] Checking apiserver status ...
	I0831 22:34:21.356788   36872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:21.375499   36872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:34:21.391746   36872 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:21.391806   36872 ssh_runner.go:195] Run: ls
	I0831 22:34:21.400237   36872 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:21.404518   36872 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:21.404539   36872 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:34:21.404548   36872 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:21.404563   36872 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:34:21.404852   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:21.404882   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:21.420348   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0831 22:34:21.420759   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:21.421230   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:21.421251   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:21.421558   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:21.421735   36872 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:34:21.423210   36872 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:34:21.423224   36872 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:21.423576   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:21.423614   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:21.441635   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0831 22:34:21.442148   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:21.442634   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:21.442655   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:21.443066   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:21.443237   36872 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:34:21.445757   36872 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:21.446092   36872 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:21.446129   36872 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:21.446241   36872 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:21.446674   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:21.446710   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:21.461380   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0831 22:34:21.461824   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:21.462275   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:21.462297   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:21.462556   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:21.462701   36872 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:34:21.462871   36872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:21.462888   36872 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:34:21.465477   36872 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:21.465848   36872 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:21.465871   36872 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:21.466004   36872 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:34:21.466179   36872 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:34:21.466329   36872 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:34:21.466461   36872 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:34:39.971583   36872 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:34:39.971687   36872 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:34:39.971708   36872 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:39.971720   36872 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:34:39.971742   36872 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:39.971752   36872 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:34:39.972056   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:39.972103   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:39.986482   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0831 22:34:39.986875   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:39.987302   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:39.987337   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:39.987644   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:39.987819   36872 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:34:39.989376   36872 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:34:39.989392   36872 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:34:39.989785   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:39.989826   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:40.004315   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0831 22:34:40.004779   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:40.005217   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:40.005240   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:40.005511   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:40.005713   36872 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:34:40.008530   36872 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:40.008943   36872 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:34:40.008975   36872 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:40.009115   36872 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:34:40.009421   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:40.009458   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:40.023887   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0831 22:34:40.024340   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:40.024770   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:40.024787   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:40.025060   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:40.025232   36872 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:34:40.025416   36872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:40.025439   36872 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:34:40.027852   36872 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:40.028258   36872 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:34:40.028282   36872 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:40.028460   36872 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:34:40.028605   36872 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:34:40.028748   36872 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:34:40.028873   36872 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:34:40.108509   36872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:40.128264   36872 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:40.128289   36872 api_server.go:166] Checking apiserver status ...
	I0831 22:34:40.128336   36872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:40.151756   36872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:34:40.167020   36872 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:40.167079   36872 ssh_runner.go:195] Run: ls
	I0831 22:34:40.171760   36872 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:40.176148   36872 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:40.176184   36872 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:34:40.176196   36872 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:40.176216   36872 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:34:40.176516   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:40.176549   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:40.191401   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0831 22:34:40.191840   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:40.192310   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:40.192334   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:40.192650   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:40.192840   36872 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:34:40.194301   36872 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:34:40.194316   36872 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:34:40.194627   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:40.194672   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:40.209523   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
	I0831 22:34:40.209945   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:40.210412   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:40.210431   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:40.210777   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:40.210978   36872 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:34:40.213944   36872 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:40.214395   36872 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:34:40.214415   36872 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:40.214559   36872 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:34:40.214863   36872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:40.214896   36872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:40.229603   36872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0831 22:34:40.230030   36872 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:40.230493   36872 main.go:141] libmachine: Using API Version  1
	I0831 22:34:40.230512   36872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:40.230798   36872 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:40.230970   36872 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:34:40.231136   36872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:40.231155   36872 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:34:40.234035   36872 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:40.234494   36872 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:34:40.234530   36872 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:40.234657   36872 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:34:40.234810   36872 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:34:40.234956   36872 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:34:40.235066   36872 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:34:40.320870   36872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:40.338496   36872 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr" : exit status 3
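The status trace above walks each node through the same sequence: read the host state from the driver, SSH in to check /var capacity and whether the kubelet service is active, then probe the cluster apiserver at https://192.168.39.254:8443/healthz. When the SSH dial to ha-957517-m02 fails with "no route to host", that node is reported as Host:Error with Kubelet and APIServer Nonexistent, which is what produces exit status 3. The snippet below is a minimal, self-contained sketch of the healthz probe only; it is not minikube's status code, and skipping TLS verification is an illustration shortcut (the real check trusts the cluster's minikubeCA).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkAPIServer performs the kind of probe the trace logs as
	// "Checking apiserver healthz at ...": an HTTP 200 response is treated
	// as a Running apiserver, anything else as an error state.
	func checkAPIServer(url string) (string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: the test cluster serves a self-signed
			// minikubeCA certificate, so verification is skipped here.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return "Stopped", err
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return "Running", nil
		}
		return "Error", fmt.Errorf("healthz returned %d", resp.StatusCode)
	}

	func main() {
		state, err := checkAPIServer("https://192.168.39.254:8443/healthz")
		fmt.Println("apiserver:", state, "err:", err)
	}
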
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-957517 -n ha-957517
helpers_test.go:245: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p ha-957517 logs -n 25: (1.462220779s)
helpers_test.go:253: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m03_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m04 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp testdata/cp-test.txt                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m04_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03:/home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m03 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-957517 node stop m02 -v=7                                                     | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:27:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:27:40.945802   32390 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:27:40.946098   32390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:27:40.946108   32390 out.go:358] Setting ErrFile to fd 2...
	I0831 22:27:40.946113   32390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:27:40.946301   32390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:27:40.946906   32390 out.go:352] Setting JSON to false
	I0831 22:27:40.947799   32390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4208,"bootTime":1725139053,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:27:40.947860   32390 start.go:139] virtualization: kvm guest
	I0831 22:27:40.950113   32390 out.go:177] * [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:27:40.951503   32390 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:27:40.951553   32390 notify.go:220] Checking for updates...
	I0831 22:27:40.953810   32390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:27:40.955161   32390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:27:40.956489   32390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:27:40.957570   32390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:27:40.958683   32390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:27:40.959945   32390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:27:40.994663   32390 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 22:27:40.995889   32390 start.go:297] selected driver: kvm2
	I0831 22:27:40.995904   32390 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:27:40.995914   32390 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:27:40.996570   32390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:27:40.996662   32390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:27:41.011574   32390 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:27:41.011620   32390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:27:41.011870   32390 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:27:41.011898   32390 cni.go:84] Creating CNI manager for ""
	I0831 22:27:41.011904   32390 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0831 22:27:41.011910   32390 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:27:41.011960   32390 start.go:340] cluster config:
	{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:27:41.012059   32390 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:27:41.013820   32390 out.go:177] * Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	I0831 22:27:41.015021   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:27:41.015059   32390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:27:41.015076   32390 cache.go:56] Caching tarball of preloaded images
	I0831 22:27:41.015179   32390 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:27:41.015193   32390 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:27:41.015592   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:27:41.015616   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json: {Name:mkff77987e3b2e05fabfb3dbe17ba9d399f610a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:27:41.015772   32390 start.go:360] acquireMachinesLock for ha-957517: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:27:41.015808   32390 start.go:364] duration metric: took 16.854µs to acquireMachinesLock for "ha-957517"
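The two WriteFile/acquireMachinesLock lines above show the pattern minikube uses before touching shared state: take a named lock with a retry delay (500ms) and an overall timeout (13m for the machines lock). A minimal sketch of that delay/timeout loop, using an O_EXCL lock file purely as a stand-in; the lock path and helper name here are hypothetical and this is not minikube's lock implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file until it succeeds or the
    // timeout expires, mirroring the Delay/Timeout fields seen in the log.
    func acquireLock(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close() // lock held; caller removes the file to release it
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for lock " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        if err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("acquired lock after %s\n", time.Since(start))
    }

The uncontended case returns almost immediately, which matches the 16.854µs duration metric logged above.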
	I0831 22:27:41.015824   32390 start.go:93] Provisioning new machine with config: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:27:41.015873   32390 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 22:27:41.017441   32390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 22:27:41.017559   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:27:41.017595   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:27:41.031812   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0831 22:27:41.032223   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:27:41.032790   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:27:41.032813   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:27:41.033097   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:27:41.033258   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:27:41.033483   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
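The "Launching plugin server ... listening at address 127.0.0.1:39365" lines show libmachine driving the kvm2 driver as an out-of-process plugin: the driver binary serves RPC on a loopback port and the client issues calls such as .GetVersion and .GetMachineName against it. A rough sketch of that client/server shape with Go's standard net/rpc; the Driver type and its method set here are illustrative stand-ins, not the real kvm2 plugin API:

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver is a stand-in for the machine driver exposed by the plugin server.
    type Driver struct{}

    // GetVersion returns the plugin API version (illustrative only).
    func (d *Driver) GetVersion(_ int, reply *int) error {
        *reply = 1
        return nil
    }

    func main() {
        // Server side: listen on an ephemeral loopback port, like the
        // "Plugin server listening at address 127.0.0.1:39365" line above.
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        go srv.Accept(ln)

        // Client side: dial the advertised address and invoke a method,
        // analogous to the "() Calling .GetVersion" log entries.
        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        var version int
        if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("plugin API version:", version)
    }

Keeping the driver in a separate process is what allows a single minikube binary to ship many VM drivers and talk to each over the same call surface.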
	I0831 22:27:41.033646   32390 start.go:159] libmachine.API.Create for "ha-957517" (driver="kvm2")
	I0831 22:27:41.033711   32390 client.go:168] LocalClient.Create starting
	I0831 22:27:41.033744   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:27:41.033772   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:27:41.033785   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:27:41.033833   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:27:41.033851   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:27:41.033865   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:27:41.033882   32390 main.go:141] libmachine: Running pre-create checks...
	I0831 22:27:41.033891   32390 main.go:141] libmachine: (ha-957517) Calling .PreCreateCheck
	I0831 22:27:41.034216   32390 main.go:141] libmachine: (ha-957517) Calling .GetConfigRaw
	I0831 22:27:41.034559   32390 main.go:141] libmachine: Creating machine...
	I0831 22:27:41.034577   32390 main.go:141] libmachine: (ha-957517) Calling .Create
	I0831 22:27:41.034714   32390 main.go:141] libmachine: (ha-957517) Creating KVM machine...
	I0831 22:27:41.035870   32390 main.go:141] libmachine: (ha-957517) DBG | found existing default KVM network
	I0831 22:27:41.036537   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.036401   32413 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0831 22:27:41.036573   32390 main.go:141] libmachine: (ha-957517) DBG | created network xml: 
	I0831 22:27:41.036593   32390 main.go:141] libmachine: (ha-957517) DBG | <network>
	I0831 22:27:41.036601   32390 main.go:141] libmachine: (ha-957517) DBG |   <name>mk-ha-957517</name>
	I0831 22:27:41.036611   32390 main.go:141] libmachine: (ha-957517) DBG |   <dns enable='no'/>
	I0831 22:27:41.036634   32390 main.go:141] libmachine: (ha-957517) DBG |   
	I0831 22:27:41.036671   32390 main.go:141] libmachine: (ha-957517) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0831 22:27:41.036685   32390 main.go:141] libmachine: (ha-957517) DBG |     <dhcp>
	I0831 22:27:41.036697   32390 main.go:141] libmachine: (ha-957517) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0831 22:27:41.036713   32390 main.go:141] libmachine: (ha-957517) DBG |     </dhcp>
	I0831 22:27:41.036728   32390 main.go:141] libmachine: (ha-957517) DBG |   </ip>
	I0831 22:27:41.036777   32390 main.go:141] libmachine: (ha-957517) DBG |   
	I0831 22:27:41.036799   32390 main.go:141] libmachine: (ha-957517) DBG | </network>
	I0831 22:27:41.036812   32390 main.go:141] libmachine: (ha-957517) DBG | 
	I0831 22:27:41.041570   32390 main.go:141] libmachine: (ha-957517) DBG | trying to create private KVM network mk-ha-957517 192.168.39.0/24...
	I0831 22:27:41.113674   32390 main.go:141] libmachine: (ha-957517) DBG | private KVM network mk-ha-957517 192.168.39.0/24 created
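The driver first generates the <network> XML shown above (DNS disabled, 192.168.39.0/24 with a DHCP range) and asks libvirt to create the private network mk-ha-957517. Outside of minikube, the same step can be approximated by shelling out to virsh, assuming virsh and access to qemu:///system are available; the kvm2 driver itself talks to libvirt through its API rather than via virsh, so this is only a sketch:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    const networkXML = `<network>
      <name>mk-ha-957517</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Write the network definition to a temp file for virsh to consume.
        f, err := os.CreateTemp("", "mk-ha-957517-*.xml")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            log.Fatal(err)
        }
        f.Close()

        // Define and start the persistent network, the equivalent of the
        // "private KVM network mk-ha-957517 192.168.39.0/24 created" step.
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-ha-957517"},
        } {
            cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("virsh %v: %v", args, err)
            }
        }
    }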
	I0831 22:27:41.113698   32390 main.go:141] libmachine: (ha-957517) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517 ...
	I0831 22:27:41.113715   32390 main.go:141] libmachine: (ha-957517) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:27:41.113758   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.113687   32413 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:27:41.113858   32390 main.go:141] libmachine: (ha-957517) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:27:41.352403   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.352292   32413 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa...
	I0831 22:27:41.479076   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.478918   32413 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/ha-957517.rawdisk...
	I0831 22:27:41.479112   32390 main.go:141] libmachine: (ha-957517) DBG | Writing magic tar header
	I0831 22:27:41.479128   32390 main.go:141] libmachine: (ha-957517) DBG | Writing SSH key tar header
	I0831 22:27:41.479141   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.479036   32413 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517 ...
	I0831 22:27:41.479154   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517
	I0831 22:27:41.479160   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:27:41.479175   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:27:41.479182   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:27:41.479196   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517 (perms=drwx------)
	I0831 22:27:41.479206   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:27:41.479221   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:27:41.479232   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home
	I0831 22:27:41.479244   32390 main.go:141] libmachine: (ha-957517) DBG | Skipping /home - not owner
	I0831 22:27:41.479254   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:27:41.479260   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:27:41.479270   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:27:41.479278   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:27:41.479289   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:27:41.479297   32390 main.go:141] libmachine: (ha-957517) Creating domain...
	I0831 22:27:41.480432   32390 main.go:141] libmachine: (ha-957517) define libvirt domain using xml: 
	I0831 22:27:41.480461   32390 main.go:141] libmachine: (ha-957517) <domain type='kvm'>
	I0831 22:27:41.480472   32390 main.go:141] libmachine: (ha-957517)   <name>ha-957517</name>
	I0831 22:27:41.480479   32390 main.go:141] libmachine: (ha-957517)   <memory unit='MiB'>2200</memory>
	I0831 22:27:41.480488   32390 main.go:141] libmachine: (ha-957517)   <vcpu>2</vcpu>
	I0831 22:27:41.480501   32390 main.go:141] libmachine: (ha-957517)   <features>
	I0831 22:27:41.480510   32390 main.go:141] libmachine: (ha-957517)     <acpi/>
	I0831 22:27:41.480520   32390 main.go:141] libmachine: (ha-957517)     <apic/>
	I0831 22:27:41.480528   32390 main.go:141] libmachine: (ha-957517)     <pae/>
	I0831 22:27:41.480549   32390 main.go:141] libmachine: (ha-957517)     
	I0831 22:27:41.480558   32390 main.go:141] libmachine: (ha-957517)   </features>
	I0831 22:27:41.480566   32390 main.go:141] libmachine: (ha-957517)   <cpu mode='host-passthrough'>
	I0831 22:27:41.480596   32390 main.go:141] libmachine: (ha-957517)   
	I0831 22:27:41.480617   32390 main.go:141] libmachine: (ha-957517)   </cpu>
	I0831 22:27:41.480629   32390 main.go:141] libmachine: (ha-957517)   <os>
	I0831 22:27:41.480640   32390 main.go:141] libmachine: (ha-957517)     <type>hvm</type>
	I0831 22:27:41.480651   32390 main.go:141] libmachine: (ha-957517)     <boot dev='cdrom'/>
	I0831 22:27:41.480660   32390 main.go:141] libmachine: (ha-957517)     <boot dev='hd'/>
	I0831 22:27:41.480666   32390 main.go:141] libmachine: (ha-957517)     <bootmenu enable='no'/>
	I0831 22:27:41.480673   32390 main.go:141] libmachine: (ha-957517)   </os>
	I0831 22:27:41.480680   32390 main.go:141] libmachine: (ha-957517)   <devices>
	I0831 22:27:41.480692   32390 main.go:141] libmachine: (ha-957517)     <disk type='file' device='cdrom'>
	I0831 22:27:41.480708   32390 main.go:141] libmachine: (ha-957517)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/boot2docker.iso'/>
	I0831 22:27:41.480723   32390 main.go:141] libmachine: (ha-957517)       <target dev='hdc' bus='scsi'/>
	I0831 22:27:41.480742   32390 main.go:141] libmachine: (ha-957517)       <readonly/>
	I0831 22:27:41.480755   32390 main.go:141] libmachine: (ha-957517)     </disk>
	I0831 22:27:41.480769   32390 main.go:141] libmachine: (ha-957517)     <disk type='file' device='disk'>
	I0831 22:27:41.480781   32390 main.go:141] libmachine: (ha-957517)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:27:41.480796   32390 main.go:141] libmachine: (ha-957517)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/ha-957517.rawdisk'/>
	I0831 22:27:41.480808   32390 main.go:141] libmachine: (ha-957517)       <target dev='hda' bus='virtio'/>
	I0831 22:27:41.480817   32390 main.go:141] libmachine: (ha-957517)     </disk>
	I0831 22:27:41.480826   32390 main.go:141] libmachine: (ha-957517)     <interface type='network'>
	I0831 22:27:41.480855   32390 main.go:141] libmachine: (ha-957517)       <source network='mk-ha-957517'/>
	I0831 22:27:41.480878   32390 main.go:141] libmachine: (ha-957517)       <model type='virtio'/>
	I0831 22:27:41.480888   32390 main.go:141] libmachine: (ha-957517)     </interface>
	I0831 22:27:41.480898   32390 main.go:141] libmachine: (ha-957517)     <interface type='network'>
	I0831 22:27:41.480911   32390 main.go:141] libmachine: (ha-957517)       <source network='default'/>
	I0831 22:27:41.480922   32390 main.go:141] libmachine: (ha-957517)       <model type='virtio'/>
	I0831 22:27:41.480933   32390 main.go:141] libmachine: (ha-957517)     </interface>
	I0831 22:27:41.480944   32390 main.go:141] libmachine: (ha-957517)     <serial type='pty'>
	I0831 22:27:41.480962   32390 main.go:141] libmachine: (ha-957517)       <target port='0'/>
	I0831 22:27:41.480978   32390 main.go:141] libmachine: (ha-957517)     </serial>
	I0831 22:27:41.480999   32390 main.go:141] libmachine: (ha-957517)     <console type='pty'>
	I0831 22:27:41.481009   32390 main.go:141] libmachine: (ha-957517)       <target type='serial' port='0'/>
	I0831 22:27:41.481018   32390 main.go:141] libmachine: (ha-957517)     </console>
	I0831 22:27:41.481039   32390 main.go:141] libmachine: (ha-957517)     <rng model='virtio'>
	I0831 22:27:41.481052   32390 main.go:141] libmachine: (ha-957517)       <backend model='random'>/dev/random</backend>
	I0831 22:27:41.481066   32390 main.go:141] libmachine: (ha-957517)     </rng>
	I0831 22:27:41.481085   32390 main.go:141] libmachine: (ha-957517)     
	I0831 22:27:41.481091   32390 main.go:141] libmachine: (ha-957517)     
	I0831 22:27:41.481101   32390 main.go:141] libmachine: (ha-957517)   </devices>
	I0831 22:27:41.481110   32390 main.go:141] libmachine: (ha-957517) </domain>
	I0831 22:27:41.481125   32390 main.go:141] libmachine: (ha-957517) 
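With the domain XML assembled (2 vCPUs, 2200 MiB, the boot2docker.iso cdrom, the raw disk, and one NIC on mk-ha-957517 plus one on the default network), the machine is created by defining and starting the libvirt domain, which is what the "define libvirt domain using xml" and "Creating domain..." lines amount to. A hedged virsh equivalent, assuming the XML above has been saved to ha-957517.xml; again, the driver uses the libvirt API directly rather than this command-line form:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // run executes a virsh subcommand against the system libvirt daemon and
    // surfaces its output.
    func run(args ...string) {
        cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("virsh %v: %v", args, err)
        }
    }

    func main() {
        run("define", "ha-957517.xml") // register the persistent domain
        run("start", "ha-957517")      // boot it; DHCP assigns the IP polled for below
    }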
	I0831 22:27:41.485236   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:1a:2a:50 in network default
	I0831 22:27:41.485771   32390 main.go:141] libmachine: (ha-957517) Ensuring networks are active...
	I0831 22:27:41.485792   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:41.486524   32390 main.go:141] libmachine: (ha-957517) Ensuring network default is active
	I0831 22:27:41.486828   32390 main.go:141] libmachine: (ha-957517) Ensuring network mk-ha-957517 is active
	I0831 22:27:41.487389   32390 main.go:141] libmachine: (ha-957517) Getting domain xml...
	I0831 22:27:41.488032   32390 main.go:141] libmachine: (ha-957517) Creating domain...
	I0831 22:27:42.668686   32390 main.go:141] libmachine: (ha-957517) Waiting to get IP...
	I0831 22:27:42.669539   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:42.669902   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:42.669946   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:42.669902   32413 retry.go:31] will retry after 310.308268ms: waiting for machine to come up
	I0831 22:27:42.981397   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:42.981861   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:42.981881   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:42.981820   32413 retry.go:31] will retry after 344.443306ms: waiting for machine to come up
	I0831 22:27:43.328335   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:43.328772   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:43.328794   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:43.328726   32413 retry.go:31] will retry after 365.569469ms: waiting for machine to come up
	I0831 22:27:43.696166   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:43.696619   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:43.696647   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:43.696574   32413 retry.go:31] will retry after 401.219481ms: waiting for machine to come up
	I0831 22:27:44.099095   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:44.099616   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:44.099645   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:44.099568   32413 retry.go:31] will retry after 481.487587ms: waiting for machine to come up
	I0831 22:27:44.583472   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:44.583852   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:44.583880   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:44.583807   32413 retry.go:31] will retry after 687.283133ms: waiting for machine to come up
	I0831 22:27:45.272575   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:45.272996   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:45.273036   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:45.272916   32413 retry.go:31] will retry after 1.085305512s: waiting for machine to come up
	I0831 22:27:46.359260   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:46.359786   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:46.359814   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:46.359737   32413 retry.go:31] will retry after 1.165071673s: waiting for machine to come up
	I0831 22:27:47.526987   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:47.527401   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:47.527434   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:47.527370   32413 retry.go:31] will retry after 1.255910404s: waiting for machine to come up
	I0831 22:27:48.784746   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:48.785208   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:48.785237   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:48.785174   32413 retry.go:31] will retry after 2.245132247s: waiting for machine to come up
	I0831 22:27:51.033508   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:51.033946   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:51.033972   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:51.033914   32413 retry.go:31] will retry after 1.78980009s: waiting for machine to come up
	I0831 22:27:52.824792   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:52.825224   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:52.825251   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:52.825164   32413 retry.go:31] will retry after 2.949499003s: waiting for machine to come up
	I0831 22:27:55.776461   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:55.776812   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:55.776836   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:55.776778   32413 retry.go:31] will retry after 2.977555208s: waiting for machine to come up
	I0831 22:27:58.757418   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:58.757866   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:58.757901   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:58.757797   32413 retry.go:31] will retry after 4.155208137s: waiting for machine to come up
	I0831 22:28:02.915266   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.915647   32390 main.go:141] libmachine: (ha-957517) Found IP for machine: 192.168.39.137
	I0831 22:28:02.915669   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has current primary IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
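Between 22:27:42 and 22:28:02 the driver cannot yet find an IP for the domain and keeps retrying with a growing delay until the DHCP lease for MAC 52:54:00:e0:42:4f appears in mk-ha-957517. A minimal sketch of that wait loop, polling `virsh net-dhcp-leases` and backing off between attempts; the lease parsing is deliberately crude and the backoff schedule is illustrative, not minikube's retry.go:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls the libvirt DHCP leases for the given MAC until one shows
    // up or the deadline passes, sleeping a little longer after each miss.
    func waitForIP(network, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            out, err := exec.Command("virsh", "-c", "qemu:///system",
                "net-dhcp-leases", network).Output()
            if err == nil {
                for _, line := range strings.Split(string(out), "\n") {
                    if !strings.Contains(line, mac) {
                        continue
                    }
                    fields := strings.Fields(line)
                    // In virsh output the IP/prefix column follows MAC and protocol.
                    for i, f := range fields {
                        if f == mac && i+2 < len(fields) {
                            return strings.Split(fields[i+2], "/")[0], nil
                        }
                    }
                }
            }
            log.Printf("will retry after %s: waiting for machine to come up", delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the delay, as the retry intervals in the log do
        }
        return "", fmt.Errorf("no DHCP lease for %s on %s", mac, network)
    }

    func main() {
        ip, err := waitForIP("mk-ha-957517", "52:54:00:e0:42:4f", 5*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Found IP for machine:", ip)
    }

Once the lease shows up, the address (192.168.39.137 here) is reserved as a static IP so it survives reboots of the VM.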
	I0831 22:28:02.915686   32390 main.go:141] libmachine: (ha-957517) Reserving static IP address...
	I0831 22:28:02.916095   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find host DHCP lease matching {name: "ha-957517", mac: "52:54:00:e0:42:4f", ip: "192.168.39.137"} in network mk-ha-957517
	I0831 22:28:02.987594   32390 main.go:141] libmachine: (ha-957517) DBG | Getting to WaitForSSH function...
	I0831 22:28:02.987619   32390 main.go:141] libmachine: (ha-957517) Reserved static IP address: 192.168.39.137
	I0831 22:28:02.987631   32390 main.go:141] libmachine: (ha-957517) Waiting for SSH to be available...
	I0831 22:28:02.989870   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.990315   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:02.990355   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.990452   32390 main.go:141] libmachine: (ha-957517) DBG | Using SSH client type: external
	I0831 22:28:02.990478   32390 main.go:141] libmachine: (ha-957517) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa (-rw-------)
	I0831 22:28:02.990505   32390 main.go:141] libmachine: (ha-957517) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:28:02.990512   32390 main.go:141] libmachine: (ha-957517) DBG | About to run SSH command:
	I0831 22:28:02.990520   32390 main.go:141] libmachine: (ha-957517) DBG | exit 0
	I0831 22:28:03.111793   32390 main.go:141] libmachine: (ha-957517) DBG | SSH cmd err, output: <nil>: 
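"Waiting for SSH" is implemented by repeatedly running `exit 0` over SSH with the flags dumped at 22:28:02.990505 (no host key checking, IdentitiesOnly, the machine's generated id_rsa, user docker). An in-process equivalent of that probe using golang.org/x/crypto/ssh, assuming the same key path and address; this is a sketch of the readiness check, not libmachine's code:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady returns nil once "exit 0" can be run as the docker user, which is
    // the probe the log issues before provisioning continues.
    func sshReady(addr, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        err := sshReady("192.168.39.137:22",
            "/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH is available")
    }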
	I0831 22:28:03.112053   32390 main.go:141] libmachine: (ha-957517) KVM machine creation complete!
	I0831 22:28:03.112363   32390 main.go:141] libmachine: (ha-957517) Calling .GetConfigRaw
	I0831 22:28:03.112895   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:03.113083   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:03.113263   32390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:28:03.113275   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:03.114482   32390 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:28:03.114501   32390 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:28:03.114506   32390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:28:03.114512   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.116359   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.116653   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.116689   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.116785   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.116970   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.117100   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.117227   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.117377   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.117581   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.117595   32390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:28:03.218736   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:28:03.218762   32390 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:28:03.218772   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.221652   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.222004   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.222029   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.222172   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.222366   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.222668   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.222832   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.223022   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.223200   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.223213   32390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:28:03.324409   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:28:03.324513   32390 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:28:03.324523   32390 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:28:03.324530   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:28:03.324771   32390 buildroot.go:166] provisioning hostname "ha-957517"
	I0831 22:28:03.324797   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:28:03.324976   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.327800   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.328195   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.328222   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.328351   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.328546   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.328726   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.328850   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.329007   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.329250   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.329269   32390 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517 && echo "ha-957517" | sudo tee /etc/hostname
	I0831 22:28:03.446380   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:28:03.446408   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.448995   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.449406   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.449440   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.449618   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.449796   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.449947   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.450054   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.450247   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.450503   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.450525   32390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:28:03.560618   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
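Provisioning sets the hostname with `sudo hostname ... | sudo tee /etc/hostname` and then patches /etc/hosts with the grep/sed snippet above so the name ha-957517 resolves via 127.0.1.1. The same edit, expressed natively in Go for illustration (a sketch; minikube runs the shell shown above over SSH rather than doing this):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: if no line already maps the
    // hostname, either rewrite an existing 127.0.1.1 entry or append one.
    func ensureHostsEntry(hostsPath, hostname string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // hostname already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if loopback.Match(data) {
            out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(hostsPath, []byte(out), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "ha-957517"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }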
	I0831 22:28:03.560652   32390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:28:03.560696   32390 buildroot.go:174] setting up certificates
	I0831 22:28:03.560711   32390 provision.go:84] configureAuth start
	I0831 22:28:03.560725   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:28:03.560979   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:03.563370   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.563685   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.563723   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.563847   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.566002   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.566315   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.566337   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.566486   32390 provision.go:143] copyHostCerts
	I0831 22:28:03.566513   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:03.566555   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:28:03.566577   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:03.566654   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:28:03.566767   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:03.566792   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:28:03.566798   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:03.566831   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:28:03.566903   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:03.566928   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:28:03.566936   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:03.566969   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:28:03.567051   32390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517 san=[127.0.0.1 192.168.39.137 ha-957517 localhost minikube]
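configureAuth copies the host CA material and then generates a server certificate signed by the local minikube CA with the SANs listed above (127.0.0.1, 192.168.39.137, ha-957517, localhost, minikube). A compact illustration of issuing such a SAN certificate with crypto/x509; the self-signed CA below exists only so the example runs on its own, and none of this is minikube's actual cert helper:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate with the given CA, adding the
    // IP and DNS SANs the log lists for ha-957517.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-957517"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.137")},
            DNSNames:     []string{"ha-957517", "localhost", "minikube"},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }

    func main() {
        // Throwaway CA standing in for minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := issueServerCert(caCert, caKey); err != nil {
            log.Fatal(err)
        }
        log.Println("issued server cert with SANs for ha-957517")
    }

The resulting server.pem and server-key.pem are what copyRemoteCerts then pushes to /etc/docker on the guest in the lines that follow.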
	I0831 22:28:03.720987   32390 provision.go:177] copyRemoteCerts
	I0831 22:28:03.721048   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:28:03.721087   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.723766   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.724157   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.724186   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.724393   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.724584   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.724739   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.724945   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:03.805712   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:28:03.805793   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:28:03.831097   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:28:03.831177   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0831 22:28:03.856577   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:28:03.856660   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:28:03.881472   32390 provision.go:87] duration metric: took 320.748156ms to configureAuth
	I0831 22:28:03.881495   32390 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:28:03.881686   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:03.881783   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.884343   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.884689   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.884714   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.884885   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.885065   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.885210   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.885359   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.885492   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.885703   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.885730   32390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:28:04.108228   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:28:04.108261   32390 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:28:04.108271   32390 main.go:141] libmachine: (ha-957517) Calling .GetURL
	I0831 22:28:04.109625   32390 main.go:141] libmachine: (ha-957517) DBG | Using libvirt version 6000000
	I0831 22:28:04.111887   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.112243   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.112267   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.112409   32390 main.go:141] libmachine: Docker is up and running!
	I0831 22:28:04.112419   32390 main.go:141] libmachine: Reticulating splines...
	I0831 22:28:04.112425   32390 client.go:171] duration metric: took 23.07870571s to LocalClient.Create
	I0831 22:28:04.112453   32390 start.go:167] duration metric: took 23.078815782s to libmachine.API.Create "ha-957517"
	I0831 22:28:04.112467   32390 start.go:293] postStartSetup for "ha-957517" (driver="kvm2")
	I0831 22:28:04.112480   32390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:28:04.112496   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.112750   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:28:04.112775   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.115036   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.115383   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.115412   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.115584   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.115787   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.115922   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.116081   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:04.198336   32390 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:28:04.202829   32390 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:28:04.202861   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:28:04.202932   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:28:04.203001   32390 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:28:04.203017   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:28:04.203150   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:28:04.212859   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:04.238294   32390 start.go:296] duration metric: took 125.815024ms for postStartSetup
	I0831 22:28:04.238369   32390 main.go:141] libmachine: (ha-957517) Calling .GetConfigRaw
	I0831 22:28:04.238895   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:04.241472   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.241847   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.241875   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.242112   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:04.242302   32390 start.go:128] duration metric: took 23.226421296s to createHost
	I0831 22:28:04.242341   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.244781   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.245093   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.245115   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.245245   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.245442   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.245622   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.245780   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.245944   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:04.246103   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:04.246117   32390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:28:04.348106   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143284.316550028
	
	I0831 22:28:04.348134   32390 fix.go:216] guest clock: 1725143284.316550028
	I0831 22:28:04.348145   32390 fix.go:229] Guest: 2024-08-31 22:28:04.316550028 +0000 UTC Remote: 2024-08-31 22:28:04.242320677 +0000 UTC m=+23.331086893 (delta=74.229351ms)
	I0831 22:28:04.348202   32390 fix.go:200] guest clock delta is within tolerance: 74.229351ms
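The guest clock check runs `date +%s.%N` on the VM, parses the epoch value, and compares it against the host's wall clock; at 22:28:04 the delta was roughly 74ms, inside tolerance. A tiny sketch of that comparison; the tolerance constant below is an assumption, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch turns "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1725143284.316550028") // value from the log
        if err != nil {
            panic(err)
        }
        delta := time.Now().Sub(guest)
        const tolerance = 1 * time.Second // assumed threshold for illustration
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
            delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
    }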
	I0831 22:28:04.348212   32390 start.go:83] releasing machines lock for "ha-957517", held for 23.332394313s
	I0831 22:28:04.348252   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.348525   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:04.350920   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.351259   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.351283   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.351454   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.351888   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.352047   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.352114   32390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:28:04.352158   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.352258   32390 ssh_runner.go:195] Run: cat /version.json
	I0831 22:28:04.352282   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.355100   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355471   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.355497   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355518   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355615   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.355822   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.355880   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.355906   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355973   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.356052   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.356138   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:04.356223   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.356358   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.356505   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:04.459874   32390 ssh_runner.go:195] Run: systemctl --version
	I0831 22:28:04.466047   32390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:28:04.625348   32390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:28:04.631490   32390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:28:04.631564   32390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:28:04.648534   32390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:28:04.648559   32390 start.go:495] detecting cgroup driver to use...
	I0831 22:28:04.648650   32390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:28:04.666821   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:28:04.680864   32390 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:28:04.680936   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:28:04.695065   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:28:04.709207   32390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:28:04.831827   32390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:28:04.991491   32390 docker.go:233] disabling docker service ...
	I0831 22:28:04.991550   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:28:05.006362   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:28:05.019197   32390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:28:05.142686   32390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:28:05.255408   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:28:05.270969   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:28:05.290387   32390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:28:05.290460   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.301062   32390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:28:05.301145   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.311884   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.322301   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.333290   32390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:28:05.344688   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.356344   32390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.377326   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.388494   32390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:28:05.398978   32390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:28:05.399043   32390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:28:05.413376   32390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:28:05.423525   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:05.535870   32390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:28:05.632034   32390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:28:05.632113   32390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:28:05.637232   32390 start.go:563] Will wait 60s for crictl version
	I0831 22:28:05.637289   32390 ssh_runner.go:195] Run: which crictl
	I0831 22:28:05.641234   32390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:28:05.685711   32390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
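
start.go above waits up to 60s for /var/run/crio/crio.sock to appear after restarting CRI-O, then probes crictl. A minimal sketch of that poll-until-ready pattern (path and timeout mirror the log; minikube actually performs the equivalent check over SSH with stat, so this local loop is illustrative only):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not ready after %s: %v", path, timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
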
	I0831 22:28:05.685799   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:05.716694   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:05.750305   32390 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:28:05.751458   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:05.754007   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:05.754345   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:05.754373   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:05.754564   32390 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:28:05.758880   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:05.772469   32390 kubeadm.go:883] updating cluster {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:28:05.772597   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:28:05.772670   32390 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:28:05.810153   32390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0831 22:28:05.810225   32390 ssh_runner.go:195] Run: which lz4
	I0831 22:28:05.814402   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0831 22:28:05.814517   32390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 22:28:05.818880   32390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 22:28:05.818915   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0831 22:28:07.153042   32390 crio.go:462] duration metric: took 1.338576702s to copy over tarball
	I0831 22:28:07.153109   32390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 22:28:09.169452   32390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.016316735s)
	I0831 22:28:09.169480   32390 crio.go:469] duration metric: took 2.016414434s to extract the tarball
	I0831 22:28:09.169490   32390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 22:28:09.206468   32390 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:28:09.251895   32390 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:28:09.251918   32390 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:28:09.251927   32390 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.0 crio true true} ...
	I0831 22:28:09.252050   32390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:28:09.252109   32390 ssh_runner.go:195] Run: crio config
	I0831 22:28:09.300337   32390 cni.go:84] Creating CNI manager for ""
	I0831 22:28:09.300355   32390 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0831 22:28:09.300376   32390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:28:09.300401   32390 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957517 NodeName:ha-957517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:28:09.300516   32390 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-957517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:28:09.300540   32390 kube-vip.go:115] generating kube-vip config ...
	I0831 22:28:09.300579   32390 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:28:09.318427   32390 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:28:09.318606   32390 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
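
kube-vip.go:137 above prints a static-pod manifest that minikube renders from a template, substituting the VIP address, port, interface, and image. A minimal sketch of the same technique with text/template; the struct and field names are made up for the example (they are not minikube's), while the substituted values are taken from the log:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the handful of values substituted into the manifest;
// the names are illustrative only.
type vipParams struct {
	VIP       string
	Port      string
	Interface string
	Image     string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	p := vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
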
	I0831 22:28:09.318662   32390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:28:09.328707   32390 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:28:09.328775   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 22:28:09.338384   32390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0831 22:28:09.354709   32390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:28:09.370922   32390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:28:09.387555   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0831 22:28:09.403236   32390 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:28:09.407029   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:09.418828   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:09.544083   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:28:09.561788   32390 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.137
	I0831 22:28:09.561811   32390 certs.go:194] generating shared ca certs ...
	I0831 22:28:09.561830   32390 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:09.562005   32390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:28:09.562071   32390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:28:09.562086   32390 certs.go:256] generating profile certs ...
	I0831 22:28:09.562181   32390 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:28:09.562205   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt with IP's: []
	I0831 22:28:09.805603   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt ...
	I0831 22:28:09.805631   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt: {Name:mk3c85a6e367e84685bb8c9f750a4856c91ffd84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:09.805800   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key ...
	I0831 22:28:09.805818   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key: {Name:mk0b319fe409d802a990382870a94357c6813c0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:09.805891   32390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb
	I0831 22:28:09.805906   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.254]
	I0831 22:28:10.075422   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb ...
	I0831 22:28:10.075459   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb: {Name:mkb0b898c9451ea30d4110b419afe0b46b519093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.075652   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb ...
	I0831 22:28:10.075671   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb: {Name:mk368a6acd117e80f148d343fe5bc16885fa570c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.075770   32390 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:28:10.075859   32390 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:28:10.075910   32390 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:28:10.075923   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt with IP's: []
	I0831 22:28:10.260972   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt ...
	I0831 22:28:10.261000   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt: {Name:mkd0e3c1a312c99613f089ee0d75d00d8bc80cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.261194   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key ...
	I0831 22:28:10.261208   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key: {Name:mk478e548843f346c10b2feee222cdac2656123b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
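
The certs.go/crypto.go steps above generate the profile's client, apiserver, and aggregator certificates, each signed by the shared minikube CA, with the apiserver cert carrying the listed IP SANs. A minimal sketch of CA-signed certificate generation with crypto/x509, assuming simplified key sizes and serial handling and with error checks abbreviated; it is not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (stands in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs the log shows for the apiserver cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.137"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	// Write the signed leaf cert as PEM, like the WriteFile steps above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
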
	I0831 22:28:10.261303   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:28:10.261321   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:28:10.261331   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:28:10.261344   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:28:10.261355   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:28:10.261366   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:28:10.261378   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:28:10.261388   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:28:10.261437   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:28:10.261471   32390 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:28:10.261480   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:28:10.261500   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:28:10.261521   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:28:10.261542   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:28:10.261577   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:10.261605   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.261620   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.261632   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.262174   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:28:10.290061   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:28:10.325128   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:28:10.367986   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:28:10.392476   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0831 22:28:10.416507   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:28:10.440851   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:28:10.465514   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:28:10.489898   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:28:10.513771   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:28:10.537008   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:28:10.560113   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:28:10.576451   32390 ssh_runner.go:195] Run: openssl version
	I0831 22:28:10.582089   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:28:10.592778   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.596976   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.597015   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.602591   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:28:10.612819   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:28:10.623013   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.627196   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.627234   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.632686   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:28:10.642922   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:28:10.653108   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.657491   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.657536   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.663173   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:28:10.673710   32390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:28:10.677849   32390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:28:10.677901   32390 kubeadm.go:392] StartCluster: {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:28:10.677982   32390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:28:10.678033   32390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:28:10.722302   32390 cri.go:89] found id: ""
	I0831 22:28:10.722358   32390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:28:10.732118   32390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:28:10.741461   32390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:28:10.753013   32390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:28:10.753033   32390 kubeadm.go:157] found existing configuration files:
	
	I0831 22:28:10.753082   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:28:10.762969   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:28:10.763029   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:28:10.772864   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:28:10.782644   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:28:10.782699   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:28:10.792127   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:28:10.801207   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:28:10.801265   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:28:10.810482   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:28:10.819184   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:28:10.819237   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:28:10.828649   32390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 22:28:10.925572   32390 kubeadm.go:310] W0831 22:28:10.899153     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:28:10.928719   32390 kubeadm.go:310] W0831 22:28:10.902326     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:28:11.038797   32390 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:28:22.047996   32390 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:28:22.048073   32390 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:28:22.048184   32390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:28:22.048314   32390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:28:22.048434   32390 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:28:22.048528   32390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:28:22.049992   32390 out.go:235]   - Generating certificates and keys ...
	I0831 22:28:22.050074   32390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:28:22.050157   32390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:28:22.050246   32390 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:28:22.050312   32390 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:28:22.050531   32390 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:28:22.050599   32390 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:28:22.050674   32390 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:28:22.050837   32390 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-957517 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0831 22:28:22.050925   32390 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:28:22.051092   32390 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-957517 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0831 22:28:22.051189   32390 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:28:22.051247   32390 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:28:22.051285   32390 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:28:22.051355   32390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:28:22.051406   32390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:28:22.051454   32390 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:28:22.051498   32390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:28:22.051591   32390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:28:22.051660   32390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:28:22.051774   32390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:28:22.051866   32390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:28:22.053545   32390 out.go:235]   - Booting up control plane ...
	I0831 22:28:22.053636   32390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:28:22.053724   32390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:28:22.053807   32390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:28:22.053929   32390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:28:22.054024   32390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:28:22.054063   32390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:28:22.054166   32390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:28:22.054251   32390 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:28:22.054299   32390 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.426939ms
	I0831 22:28:22.054360   32390 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:28:22.054412   32390 kubeadm.go:310] [api-check] The API server is healthy after 5.955214171s
	I0831 22:28:22.054501   32390 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:28:22.054606   32390 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:28:22.054654   32390 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:28:22.054810   32390 kubeadm.go:310] [mark-control-plane] Marking the node ha-957517 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:28:22.054895   32390 kubeadm.go:310] [bootstrap-token] Using token: g1v7x3.21whabocm7k8avb9
	I0831 22:28:22.056571   32390 out.go:235]   - Configuring RBAC rules ...
	I0831 22:28:22.056676   32390 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:28:22.056769   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:28:22.056933   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:28:22.057043   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:28:22.057146   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:28:22.057223   32390 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:28:22.057315   32390 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:28:22.057356   32390 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:28:22.057400   32390 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:28:22.057406   32390 kubeadm.go:310] 
	I0831 22:28:22.057473   32390 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:28:22.057482   32390 kubeadm.go:310] 
	I0831 22:28:22.057591   32390 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:28:22.057604   32390 kubeadm.go:310] 
	I0831 22:28:22.057645   32390 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:28:22.057730   32390 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:28:22.057802   32390 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:28:22.057809   32390 kubeadm.go:310] 
	I0831 22:28:22.057853   32390 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:28:22.057858   32390 kubeadm.go:310] 
	I0831 22:28:22.057897   32390 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:28:22.057903   32390 kubeadm.go:310] 
	I0831 22:28:22.057948   32390 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:28:22.058015   32390 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:28:22.058080   32390 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:28:22.058086   32390 kubeadm.go:310] 
	I0831 22:28:22.058156   32390 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:28:22.058220   32390 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:28:22.058228   32390 kubeadm.go:310] 
	I0831 22:28:22.058298   32390 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g1v7x3.21whabocm7k8avb9 \
	I0831 22:28:22.058425   32390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e \
	I0831 22:28:22.058460   32390 kubeadm.go:310] 	--control-plane 
	I0831 22:28:22.058466   32390 kubeadm.go:310] 
	I0831 22:28:22.058542   32390 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:28:22.058550   32390 kubeadm.go:310] 
	I0831 22:28:22.058635   32390 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g1v7x3.21whabocm7k8avb9 \
	I0831 22:28:22.058740   32390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e 
	I0831 22:28:22.058763   32390 cni.go:84] Creating CNI manager for ""
	I0831 22:28:22.058772   32390 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0831 22:28:22.060436   32390 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0831 22:28:22.061811   32390 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0831 22:28:22.067545   32390 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0831 22:28:22.067562   32390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0831 22:28:22.087664   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0831 22:28:22.518741   32390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:28:22.518778   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:22.518852   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957517 minikube.k8s.io/updated_at=2024_08_31T22_28_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=ha-957517 minikube.k8s.io/primary=true
	I0831 22:28:22.732258   32390 ops.go:34] apiserver oom_adj: -16
	I0831 22:28:22.732336   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:23.233261   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:23.733264   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:24.232391   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:24.733079   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:25.233160   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:25.732935   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:26.232794   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:26.334082   32390 kubeadm.go:1113] duration metric: took 3.81535594s to wait for elevateKubeSystemPrivileges
	I0831 22:28:26.334116   32390 kubeadm.go:394] duration metric: took 15.656216472s to StartCluster
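
The block above retries `kubectl get sa default` roughly every 500ms until the default service account exists, then reports how long elevateKubeSystemPrivileges and StartCluster took. A minimal sketch of that retry-with-deadline pattern using os/exec; the kubectl binary path and flags are copied from the log, while the loop itself is illustrative rather than minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the retries visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s: %v", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
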
	I0831 22:28:26.334136   32390 settings.go:142] acquiring lock: {Name:mkec6b4f5d3301688503002977bc4d63aab7adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:26.334225   32390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:28:26.334844   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/kubeconfig: {Name:mkc6d6b60cc62b336d228fe4b49e098aa4d94f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:26.335060   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:28:26.335087   32390 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:28:26.335110   32390 start.go:241] waiting for startup goroutines ...
	I0831 22:28:26.335118   32390 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 22:28:26.335178   32390 addons.go:69] Setting storage-provisioner=true in profile "ha-957517"
	I0831 22:28:26.335188   32390 addons.go:69] Setting default-storageclass=true in profile "ha-957517"
	I0831 22:28:26.335209   32390 addons.go:234] Setting addon storage-provisioner=true in "ha-957517"
	I0831 22:28:26.335217   32390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-957517"
	I0831 22:28:26.335249   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:26.335298   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:26.335614   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.335641   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.335654   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.335685   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.350245   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0831 22:28:26.350634   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.351162   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.351187   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.351527   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.351727   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:26.353759   32390 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:28:26.354122   32390 kapi.go:59] client config for ha-957517: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
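
kapi.go:59 above builds a *rest.Config that points at the HA VIP (https://192.168.39.254:8443) and authenticates with the profile's client cert and the minikube CA. A minimal sketch of loading the same kubeconfig with client-go and listing kube-system pods; the kubeconfig path comes from the log, and error handling is abbreviated:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig minikube just wrote.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-13149/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
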
	I0831 22:28:26.354622   32390 cert_rotation.go:140] Starting client certificate rotation controller
	I0831 22:28:26.354884   32390 addons.go:234] Setting addon default-storageclass=true in "ha-957517"
	I0831 22:28:26.354919   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:26.355016   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I0831 22:28:26.355291   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.355317   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.355396   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.355844   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.355865   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.356199   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.356783   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.356814   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.369948   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0831 22:28:26.370424   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.370778   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34193
	I0831 22:28:26.370919   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.370938   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.371101   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.371229   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.371503   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.371520   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.371754   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.371790   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.371855   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.372022   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:26.373948   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:26.376433   32390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:28:26.377866   32390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:28:26.377882   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:28:26.377900   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:26.380588   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.380988   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:26.381022   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.381206   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:26.381381   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:26.381543   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:26.381698   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:26.387026   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0831 22:28:26.387349   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.387733   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.387755   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.388032   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.388180   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:26.389338   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:26.389500   32390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:28:26.389513   32390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:28:26.389527   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:26.392386   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.392807   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:26.392834   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.392973   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:26.393125   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:26.393297   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:26.393429   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:26.508204   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:28:26.520058   32390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:28:26.529019   32390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:28:26.992273   32390 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
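	The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves from inside the cluster. Reconstructed from that command (the resulting Corefile is not captured verbatim in this log), the stanza it injects ahead of the existing "forward . /etc/resolv.conf" block looks like:
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	The same pipeline also inserts a "log" directive before the Corefile's "errors" line.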
	I0831 22:28:27.172898   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.172918   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.172963   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.172980   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.173231   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173246   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.173253   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.173262   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.173345   32390 main.go:141] libmachine: (ha-957517) DBG | Closing plugin on server side
	I0831 22:28:27.173357   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173366   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.173381   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.173392   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.173461   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173477   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.173537   32390 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 22:28:27.173555   32390 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 22:28:27.173669   32390 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0831 22:28:27.173679   32390 round_trippers.go:469] Request Headers:
	I0831 22:28:27.173691   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:28:27.173688   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173696   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:28:27.173706   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.185379   32390 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0831 22:28:27.186095   32390 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0831 22:28:27.186109   32390 round_trippers.go:469] Request Headers:
	I0831 22:28:27.186121   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:28:27.186130   32390 round_trippers.go:473]     Content-Type: application/json
	I0831 22:28:27.186136   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:28:27.191616   32390 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 22:28:27.191837   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.191853   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.192084   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.192101   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.194185   32390 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0831 22:28:27.195609   32390 addons.go:510] duration metric: took 860.487547ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0831 22:28:27.195646   32390 start.go:246] waiting for cluster config update ...
	I0831 22:28:27.195661   32390 start.go:255] writing updated cluster config ...
	I0831 22:28:27.197202   32390 out.go:201] 
	I0831 22:28:27.198596   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:27.198655   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:27.200253   32390 out.go:177] * Starting "ha-957517-m02" control-plane node in "ha-957517" cluster
	I0831 22:28:27.201593   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:28:27.201611   32390 cache.go:56] Caching tarball of preloaded images
	I0831 22:28:27.201711   32390 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:28:27.201725   32390 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:28:27.201780   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:27.202121   32390 start.go:360] acquireMachinesLock for ha-957517-m02: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:28:27.202167   32390 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "ha-957517-m02"
	I0831 22:28:27.202189   32390 start.go:93] Provisioning new machine with config: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:28:27.202258   32390 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0831 22:28:27.203830   32390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 22:28:27.203917   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:27.203948   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:27.218470   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0831 22:28:27.218899   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:27.219448   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:27.219466   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:27.219768   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:27.219957   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:27.220099   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:27.220282   32390 start.go:159] libmachine.API.Create for "ha-957517" (driver="kvm2")
	I0831 22:28:27.220303   32390 client.go:168] LocalClient.Create starting
	I0831 22:28:27.220332   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:28:27.220369   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:28:27.220388   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:28:27.220457   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:28:27.220482   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:28:27.220508   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:28:27.220533   32390 main.go:141] libmachine: Running pre-create checks...
	I0831 22:28:27.220544   32390 main.go:141] libmachine: (ha-957517-m02) Calling .PreCreateCheck
	I0831 22:28:27.220699   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetConfigRaw
	I0831 22:28:27.221082   32390 main.go:141] libmachine: Creating machine...
	I0831 22:28:27.221096   32390 main.go:141] libmachine: (ha-957517-m02) Calling .Create
	I0831 22:28:27.221224   32390 main.go:141] libmachine: (ha-957517-m02) Creating KVM machine...
	I0831 22:28:27.222386   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found existing default KVM network
	I0831 22:28:27.222566   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found existing private KVM network mk-ha-957517
	I0831 22:28:27.222708   32390 main.go:141] libmachine: (ha-957517-m02) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02 ...
	I0831 22:28:27.222727   32390 main.go:141] libmachine: (ha-957517-m02) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:28:27.222810   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.222710   32754 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:28:27.222899   32390 main.go:141] libmachine: (ha-957517-m02) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:28:27.464061   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.463924   32754 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa...
	I0831 22:28:27.596673   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.596561   32754 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/ha-957517-m02.rawdisk...
	I0831 22:28:27.596706   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Writing magic tar header
	I0831 22:28:27.596724   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Writing SSH key tar header
	I0831 22:28:27.596736   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.596681   32754 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02 ...
	I0831 22:28:27.596840   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02
	I0831 22:28:27.596867   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02 (perms=drwx------)
	I0831 22:28:27.596879   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:28:27.596900   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:28:27.596915   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:28:27.596925   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:28:27.596941   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:28:27.596954   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:28:27.596966   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:28:27.596983   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:28:27.596995   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:28:27.597006   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:28:27.597015   32390 main.go:141] libmachine: (ha-957517-m02) Creating domain...
	I0831 22:28:27.597032   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home
	I0831 22:28:27.597043   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Skipping /home - not owner
	I0831 22:28:27.598016   32390 main.go:141] libmachine: (ha-957517-m02) define libvirt domain using xml: 
	I0831 22:28:27.598046   32390 main.go:141] libmachine: (ha-957517-m02) <domain type='kvm'>
	I0831 22:28:27.598057   32390 main.go:141] libmachine: (ha-957517-m02)   <name>ha-957517-m02</name>
	I0831 22:28:27.598071   32390 main.go:141] libmachine: (ha-957517-m02)   <memory unit='MiB'>2200</memory>
	I0831 22:28:27.598083   32390 main.go:141] libmachine: (ha-957517-m02)   <vcpu>2</vcpu>
	I0831 22:28:27.598095   32390 main.go:141] libmachine: (ha-957517-m02)   <features>
	I0831 22:28:27.598104   32390 main.go:141] libmachine: (ha-957517-m02)     <acpi/>
	I0831 22:28:27.598109   32390 main.go:141] libmachine: (ha-957517-m02)     <apic/>
	I0831 22:28:27.598114   32390 main.go:141] libmachine: (ha-957517-m02)     <pae/>
	I0831 22:28:27.598119   32390 main.go:141] libmachine: (ha-957517-m02)     
	I0831 22:28:27.598124   32390 main.go:141] libmachine: (ha-957517-m02)   </features>
	I0831 22:28:27.598130   32390 main.go:141] libmachine: (ha-957517-m02)   <cpu mode='host-passthrough'>
	I0831 22:28:27.598140   32390 main.go:141] libmachine: (ha-957517-m02)   
	I0831 22:28:27.598151   32390 main.go:141] libmachine: (ha-957517-m02)   </cpu>
	I0831 22:28:27.598156   32390 main.go:141] libmachine: (ha-957517-m02)   <os>
	I0831 22:28:27.598161   32390 main.go:141] libmachine: (ha-957517-m02)     <type>hvm</type>
	I0831 22:28:27.598166   32390 main.go:141] libmachine: (ha-957517-m02)     <boot dev='cdrom'/>
	I0831 22:28:27.598170   32390 main.go:141] libmachine: (ha-957517-m02)     <boot dev='hd'/>
	I0831 22:28:27.598176   32390 main.go:141] libmachine: (ha-957517-m02)     <bootmenu enable='no'/>
	I0831 22:28:27.598179   32390 main.go:141] libmachine: (ha-957517-m02)   </os>
	I0831 22:28:27.598187   32390 main.go:141] libmachine: (ha-957517-m02)   <devices>
	I0831 22:28:27.598192   32390 main.go:141] libmachine: (ha-957517-m02)     <disk type='file' device='cdrom'>
	I0831 22:28:27.598203   32390 main.go:141] libmachine: (ha-957517-m02)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/boot2docker.iso'/>
	I0831 22:28:27.598210   32390 main.go:141] libmachine: (ha-957517-m02)       <target dev='hdc' bus='scsi'/>
	I0831 22:28:27.598217   32390 main.go:141] libmachine: (ha-957517-m02)       <readonly/>
	I0831 22:28:27.598224   32390 main.go:141] libmachine: (ha-957517-m02)     </disk>
	I0831 22:28:27.598235   32390 main.go:141] libmachine: (ha-957517-m02)     <disk type='file' device='disk'>
	I0831 22:28:27.598244   32390 main.go:141] libmachine: (ha-957517-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:28:27.598257   32390 main.go:141] libmachine: (ha-957517-m02)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/ha-957517-m02.rawdisk'/>
	I0831 22:28:27.598268   32390 main.go:141] libmachine: (ha-957517-m02)       <target dev='hda' bus='virtio'/>
	I0831 22:28:27.598276   32390 main.go:141] libmachine: (ha-957517-m02)     </disk>
	I0831 22:28:27.598286   32390 main.go:141] libmachine: (ha-957517-m02)     <interface type='network'>
	I0831 22:28:27.598311   32390 main.go:141] libmachine: (ha-957517-m02)       <source network='mk-ha-957517'/>
	I0831 22:28:27.598334   32390 main.go:141] libmachine: (ha-957517-m02)       <model type='virtio'/>
	I0831 22:28:27.598346   32390 main.go:141] libmachine: (ha-957517-m02)     </interface>
	I0831 22:28:27.598357   32390 main.go:141] libmachine: (ha-957517-m02)     <interface type='network'>
	I0831 22:28:27.598368   32390 main.go:141] libmachine: (ha-957517-m02)       <source network='default'/>
	I0831 22:28:27.598379   32390 main.go:141] libmachine: (ha-957517-m02)       <model type='virtio'/>
	I0831 22:28:27.598390   32390 main.go:141] libmachine: (ha-957517-m02)     </interface>
	I0831 22:28:27.598400   32390 main.go:141] libmachine: (ha-957517-m02)     <serial type='pty'>
	I0831 22:28:27.598433   32390 main.go:141] libmachine: (ha-957517-m02)       <target port='0'/>
	I0831 22:28:27.598455   32390 main.go:141] libmachine: (ha-957517-m02)     </serial>
	I0831 22:28:27.598469   32390 main.go:141] libmachine: (ha-957517-m02)     <console type='pty'>
	I0831 22:28:27.598483   32390 main.go:141] libmachine: (ha-957517-m02)       <target type='serial' port='0'/>
	I0831 22:28:27.598497   32390 main.go:141] libmachine: (ha-957517-m02)     </console>
	I0831 22:28:27.598511   32390 main.go:141] libmachine: (ha-957517-m02)     <rng model='virtio'>
	I0831 22:28:27.598526   32390 main.go:141] libmachine: (ha-957517-m02)       <backend model='random'>/dev/random</backend>
	I0831 22:28:27.598537   32390 main.go:141] libmachine: (ha-957517-m02)     </rng>
	I0831 22:28:27.598546   32390 main.go:141] libmachine: (ha-957517-m02)     
	I0831 22:28:27.598561   32390 main.go:141] libmachine: (ha-957517-m02)     
	I0831 22:28:27.598573   32390 main.go:141] libmachine: (ha-957517-m02)   </devices>
	I0831 22:28:27.598583   32390 main.go:141] libmachine: (ha-957517-m02) </domain>
	I0831 22:28:27.598595   32390 main.go:141] libmachine: (ha-957517-m02) 
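	The libvirt domain definition above is emitted one log line at a time by the kvm2 driver. As a debugging sketch only (assuming virsh is installed on the Jenkins host and the qemu:///system URI shown earlier in the machine config), the same information can be inspected directly:
	# dump the XML libmachine just defined for the second control-plane node
	virsh --connect qemu:///system dumpxml ha-957517-m02
	# watch DHCP leases on the private network while the driver retries "Waiting to get IP"
	virsh --connect qemu:///system net-dhcp-leases mk-ha-957517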
	I0831 22:28:27.606046   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:77:f7:02 in network default
	I0831 22:28:27.606628   32390 main.go:141] libmachine: (ha-957517-m02) Ensuring networks are active...
	I0831 22:28:27.606653   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:27.607321   32390 main.go:141] libmachine: (ha-957517-m02) Ensuring network default is active
	I0831 22:28:27.607692   32390 main.go:141] libmachine: (ha-957517-m02) Ensuring network mk-ha-957517 is active
	I0831 22:28:27.608118   32390 main.go:141] libmachine: (ha-957517-m02) Getting domain xml...
	I0831 22:28:27.608771   32390 main.go:141] libmachine: (ha-957517-m02) Creating domain...
	I0831 22:28:28.787010   32390 main.go:141] libmachine: (ha-957517-m02) Waiting to get IP...
	I0831 22:28:28.787751   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:28.788105   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:28.788128   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:28.788090   32754 retry.go:31] will retry after 243.362281ms: waiting for machine to come up
	I0831 22:28:29.033610   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:29.034078   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:29.034096   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:29.034050   32754 retry.go:31] will retry after 243.613799ms: waiting for machine to come up
	I0831 22:28:29.279508   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:29.279930   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:29.279969   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:29.279892   32754 retry.go:31] will retry after 359.068943ms: waiting for machine to come up
	I0831 22:28:29.641640   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:29.642053   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:29.642074   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:29.642015   32754 retry.go:31] will retry after 517.837365ms: waiting for machine to come up
	I0831 22:28:30.161608   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:30.162039   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:30.162069   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:30.161994   32754 retry.go:31] will retry after 556.118435ms: waiting for machine to come up
	I0831 22:28:30.719681   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:30.720157   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:30.720186   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:30.720091   32754 retry.go:31] will retry after 830.853012ms: waiting for machine to come up
	I0831 22:28:31.552034   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:31.552488   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:31.552519   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:31.552440   32754 retry.go:31] will retry after 1.186910615s: waiting for machine to come up
	I0831 22:28:32.740382   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:32.740794   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:32.740815   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:32.740769   32754 retry.go:31] will retry after 1.401520174s: waiting for machine to come up
	I0831 22:28:34.144309   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:34.144770   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:34.144797   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:34.144733   32754 retry.go:31] will retry after 1.316598575s: waiting for machine to come up
	I0831 22:28:35.463142   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:35.463557   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:35.463590   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:35.463507   32754 retry.go:31] will retry after 2.182834787s: waiting for machine to come up
	I0831 22:28:37.648250   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:37.648795   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:37.648823   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:37.648745   32754 retry.go:31] will retry after 2.150253237s: waiting for machine to come up
	I0831 22:28:39.800341   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:39.800795   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:39.800816   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:39.800763   32754 retry.go:31] will retry after 2.340318676s: waiting for machine to come up
	I0831 22:28:42.142343   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:42.142784   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:42.142816   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:42.142730   32754 retry.go:31] will retry after 3.297096591s: waiting for machine to come up
	I0831 22:28:45.441400   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:45.441730   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:45.441752   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:45.441682   32754 retry.go:31] will retry after 5.294406767s: waiting for machine to come up
	I0831 22:28:50.739962   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.740377   32390 main.go:141] libmachine: (ha-957517-m02) Found IP for machine: 192.168.39.61
	I0831 22:28:50.740407   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has current primary IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.740416   32390 main.go:141] libmachine: (ha-957517-m02) Reserving static IP address...
	I0831 22:28:50.740741   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find host DHCP lease matching {name: "ha-957517-m02", mac: "52:54:00:d0:a3:98", ip: "192.168.39.61"} in network mk-ha-957517
	I0831 22:28:50.811691   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Getting to WaitForSSH function...
	I0831 22:28:50.811722   32390 main.go:141] libmachine: (ha-957517-m02) Reserved static IP address: 192.168.39.61
	I0831 22:28:50.811735   32390 main.go:141] libmachine: (ha-957517-m02) Waiting for SSH to be available...
	I0831 22:28:50.814182   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.814543   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:50.814561   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.814759   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Using SSH client type: external
	I0831 22:28:50.814784   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa (-rw-------)
	I0831 22:28:50.814814   32390 main.go:141] libmachine: (ha-957517-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:28:50.814831   32390 main.go:141] libmachine: (ha-957517-m02) DBG | About to run SSH command:
	I0831 22:28:50.814846   32390 main.go:141] libmachine: (ha-957517-m02) DBG | exit 0
	I0831 22:28:50.943179   32390 main.go:141] libmachine: (ha-957517-m02) DBG | SSH cmd err, output: <nil>: 
	I0831 22:28:50.943449   32390 main.go:141] libmachine: (ha-957517-m02) KVM machine creation complete!
	I0831 22:28:50.943801   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetConfigRaw
	I0831 22:28:50.944338   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:50.944529   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:50.944697   32390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:28:50.944710   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:28:50.945995   32390 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:28:50.946011   32390 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:28:50.946017   32390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:28:50.946023   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:50.948383   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.948775   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:50.948801   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.948948   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:50.949120   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:50.949270   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:50.949392   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:50.949575   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:50.949780   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:50.949793   32390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:28:51.058642   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:28:51.058665   32390 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:28:51.058676   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.061589   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.061990   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.062011   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.062214   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.062391   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.062559   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.062704   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.062875   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.063065   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.063077   32390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:28:51.171729   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:28:51.171805   32390 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:28:51.171815   32390 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:28:51.171824   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:51.172081   32390 buildroot.go:166] provisioning hostname "ha-957517-m02"
	I0831 22:28:51.172110   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:51.172298   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.174636   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.174937   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.174963   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.175084   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.175367   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.175620   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.175770   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.175940   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.176103   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.176115   32390 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517-m02 && echo "ha-957517-m02" | sudo tee /etc/hostname
	I0831 22:28:51.297517   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517-m02
	
	I0831 22:28:51.297541   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.300104   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.300437   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.300460   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.300605   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.300778   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.300923   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.301019   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.301206   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.301364   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.301380   32390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:28:51.420951   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
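	The hostname script above only touches /etc/hosts when no entry for ha-957517-m02 is already present; on this fresh Buildroot guest the net effect (reconstructed from the script, not shown in the log) is a single line:
	127.0.1.1 ha-957517-m02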
	I0831 22:28:51.420978   32390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:28:51.420996   32390 buildroot.go:174] setting up certificates
	I0831 22:28:51.421010   32390 provision.go:84] configureAuth start
	I0831 22:28:51.421022   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:51.421294   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:51.423809   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.424172   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.424196   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.424326   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.426435   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.426694   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.426706   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.426861   32390 provision.go:143] copyHostCerts
	I0831 22:28:51.426886   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:51.426923   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:28:51.426932   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:51.427012   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:28:51.427136   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:51.427415   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:28:51.427434   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:51.427479   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:28:51.427643   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:51.427666   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:28:51.427672   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:51.427705   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:28:51.427790   32390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517-m02 san=[127.0.0.1 192.168.39.61 ha-957517-m02 localhost minikube]
	I0831 22:28:51.541189   32390 provision.go:177] copyRemoteCerts
	I0831 22:28:51.541254   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:28:51.541284   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.544087   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.544393   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.544418   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.544657   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.544882   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.545038   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.545186   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:51.629304   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:28:51.629365   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:28:51.654038   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:28:51.654101   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:28:51.678394   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:28:51.678465   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:28:51.701780   32390 provision.go:87] duration metric: took 280.752455ms to configureAuth
	I0831 22:28:51.701807   32390 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:28:51.702001   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:51.702090   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.704677   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.705020   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.705047   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.705250   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.705424   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.705583   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.705740   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.705916   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.706060   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.706074   32390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:28:51.929211   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:28:51.929239   32390 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:28:51.929248   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetURL
	I0831 22:28:51.930425   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Using libvirt version 6000000
	I0831 22:28:51.932552   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.932820   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.932850   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.932962   32390 main.go:141] libmachine: Docker is up and running!
	I0831 22:28:51.932978   32390 main.go:141] libmachine: Reticulating splines...
	I0831 22:28:51.932986   32390 client.go:171] duration metric: took 24.7126751s to LocalClient.Create
	I0831 22:28:51.933009   32390 start.go:167] duration metric: took 24.71272858s to libmachine.API.Create "ha-957517"
	I0831 22:28:51.933020   32390 start.go:293] postStartSetup for "ha-957517-m02" (driver="kvm2")
	I0831 22:28:51.933029   32390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:28:51.933044   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:51.933279   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:28:51.933303   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.935189   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.935479   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.935507   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.935649   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.935796   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.935948   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.936037   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:52.021581   32390 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:28:52.026029   32390 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:28:52.026052   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:28:52.026177   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:28:52.026304   32390 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:28:52.026317   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:28:52.026427   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:28:52.036192   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:52.062145   32390 start.go:296] duration metric: took 129.114548ms for postStartSetup
	I0831 22:28:52.062184   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetConfigRaw
	I0831 22:28:52.062691   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:52.065694   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.066141   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.066168   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.066459   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:52.066633   32390 start.go:128] duration metric: took 24.864364924s to createHost
	I0831 22:28:52.066652   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:52.068944   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.069321   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.069350   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.069533   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:52.069755   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.069924   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.070092   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:52.070283   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:52.070504   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:52.070520   32390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:28:52.184296   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143332.160271398
	
	I0831 22:28:52.184316   32390 fix.go:216] guest clock: 1725143332.160271398
	I0831 22:28:52.184322   32390 fix.go:229] Guest: 2024-08-31 22:28:52.160271398 +0000 UTC Remote: 2024-08-31 22:28:52.066642729 +0000 UTC m=+71.155408944 (delta=93.628669ms)
	I0831 22:28:52.184336   32390 fix.go:200] guest clock delta is within tolerance: 93.628669ms
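For context on the fix.go lines above: the guest clock is read over SSH with `date +%s.%N`, parsed, and compared against the local wall clock, and the run proceeds only if the absolute delta is within a tolerance. A minimal standalone sketch of that check (the function name and the 1s tolerance are illustrative assumptions, not minikube's actual values):

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// clockDelta parses the output of `date +%s.%N` from the guest and returns
	// the absolute difference to the local clock. Hypothetical helper; minikube's
	// fix.go has its own implementation and tolerance.
	func clockDelta(guest string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guest), 64)
		if err != nil {
			return 0, err
		}
		guestTime := time.Unix(0, int64(secs*float64(time.Second)))
		d := time.Since(guestTime)
		if d < 0 {
			d = -d
		}
		return d, nil
	}
	
	func main() {
		d, err := clockDelta("1725143332.160271398") // value taken from the log above
		if err != nil {
			panic(err)
		}
		fmt.Printf("delta=%v within tolerance=%v\n", d, d <= time.Second) // assumed 1s tolerance
	}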
	I0831 22:28:52.184340   32390 start.go:83] releasing machines lock for "ha-957517-m02", held for 24.982161706s
	I0831 22:28:52.184355   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.184586   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:52.187347   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.187705   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.187725   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.189995   32390 out.go:177] * Found network options:
	I0831 22:28:52.191454   32390 out.go:177]   - NO_PROXY=192.168.39.137
	W0831 22:28:52.192882   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:28:52.192907   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.193396   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.193585   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.193695   32390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:28:52.193732   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	W0831 22:28:52.193825   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:28:52.193881   32390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:28:52.193897   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:52.196379   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.196622   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.196690   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.196713   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.196823   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:52.196986   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.197143   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.197160   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:52.197175   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.197270   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:52.197342   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:52.197441   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.197579   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:52.197698   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:52.440877   32390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:28:52.447656   32390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:28:52.447717   32390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:28:52.464132   32390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:28:52.464153   32390 start.go:495] detecting cgroup driver to use...
	I0831 22:28:52.464210   32390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:28:52.481918   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:28:52.495851   32390 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:28:52.495906   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:28:52.509527   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:28:52.522517   32390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:28:52.638789   32390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:28:52.796162   32390 docker.go:233] disabling docker service ...
	I0831 22:28:52.796229   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:28:52.810377   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:28:52.823253   32390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:28:52.934707   32390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:28:53.047800   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:28:53.063463   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:28:53.081704   32390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:28:53.081764   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.091965   32390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:28:53.092024   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.102695   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.114994   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.126800   32390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:28:53.137222   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.147123   32390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.164244   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
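The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, re-pin conmon's cgroup, and make default_sysctls allow unprivileged low ports. A rough Go equivalent of the pause-image, cgroup-manager and conmon_cgroup substitutions (the default_sysctls edits follow the same pattern; applyCrioOverrides is a hypothetical name, not minikube's code):

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// applyCrioOverrides mirrors the sed commands run over
	// /etc/crio/crio.conf.d/02-crio.conf in the log above.
	func applyCrioOverrides(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
		return conf
	}
	
	func main() {
		in := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Println(applyCrioOverrides(in))
	}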
	I0831 22:28:53.173764   32390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:28:53.182563   32390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:28:53.182608   32390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:28:53.194444   32390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:28:53.203288   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:53.314804   32390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:28:53.414716   32390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:28:53.414790   32390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:28:53.419842   32390 start.go:563] Will wait 60s for crictl version
	I0831 22:28:53.419894   32390 ssh_runner.go:195] Run: which crictl
	I0831 22:28:53.423434   32390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:28:53.458924   32390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:28:53.458999   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:53.486355   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:53.514411   32390 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:28:53.515852   32390 out.go:177]   - env NO_PROXY=192.168.39.137
	I0831 22:28:53.517106   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:53.519492   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:53.519912   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:53.519934   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:53.520098   32390 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:28:53.523933   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:53.535737   32390 mustload.go:65] Loading cluster: ha-957517
	I0831 22:28:53.535907   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:53.536140   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:53.536182   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:53.550774   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0831 22:28:53.551178   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:53.551600   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:53.551621   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:53.551893   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:53.552045   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:53.553598   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:53.553888   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:53.553932   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:53.568671   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0831 22:28:53.569210   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:53.569685   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:53.569708   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:53.570070   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:53.570277   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:53.570462   32390 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.61
	I0831 22:28:53.570472   32390 certs.go:194] generating shared ca certs ...
	I0831 22:28:53.570489   32390 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:53.570633   32390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:28:53.570683   32390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:28:53.570694   32390 certs.go:256] generating profile certs ...
	I0831 22:28:53.570778   32390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:28:53.570809   32390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2
	I0831 22:28:53.570827   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.254]
	I0831 22:28:53.710539   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2 ...
	I0831 22:28:53.710563   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2: {Name:mk538af76639062ba338a47a4d807743b9ff5577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:53.710720   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2 ...
	I0831 22:28:53.710733   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2: {Name:mk009c0022cdeda046304ef0899ed335a9aeb360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:53.710799   32390 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:28:53.710920   32390 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
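The profile cert generated above is an apiserver serving certificate whose SANs are the IPs listed in the crypto.go line: the in-cluster service VIP 10.96.0.1, localhost, 10.0.0.1, both control-plane node IPs, and the kube-vip VIP 192.168.39.254. A bare crypto/x509 sketch of issuing a cert with those SANs (self-signed here for brevity; the real one is signed with the minikubeCA key referenced above):

	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN IPs taken from the crypto.go line above
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.137"), net.ParseIP("192.168.39.61"), net.ParseIP("192.168.39.254"),
			},
		}
		// Self-signed for the sketch; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}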
	I0831 22:28:53.711037   32390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:28:53.711051   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:28:53.711063   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:28:53.711077   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:28:53.711090   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:28:53.711102   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:28:53.711114   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:28:53.711126   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:28:53.711137   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:28:53.711181   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:28:53.711208   32390 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:28:53.711219   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:28:53.711242   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:28:53.711261   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:28:53.711283   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:28:53.711319   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:53.711365   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:28:53.711379   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:53.711392   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:28:53.711422   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:53.714506   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:53.714911   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:53.714938   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:53.715082   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:53.715321   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:53.715480   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:53.715587   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:53.783633   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 22:28:53.788356   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 22:28:53.799627   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 22:28:53.804433   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 22:28:53.816674   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 22:28:53.821486   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 22:28:53.832363   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 22:28:53.837054   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 22:28:53.852852   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 22:28:53.857503   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 22:28:53.868022   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 22:28:53.872537   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 22:28:53.884015   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:28:53.909663   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:28:53.933495   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:28:53.957903   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:28:53.981855   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0831 22:28:54.005508   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:28:54.029675   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:28:54.053280   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:28:54.076641   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:28:54.101006   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:28:54.124523   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:28:54.147377   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 22:28:54.163427   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 22:28:54.179408   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 22:28:54.195690   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 22:28:54.211905   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 22:28:54.228975   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 22:28:54.245786   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 22:28:54.263032   32390 ssh_runner.go:195] Run: openssl version
	I0831 22:28:54.268756   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:28:54.279736   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:28:54.284315   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:28:54.284363   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:28:54.290270   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:28:54.300756   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:28:54.311469   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:54.315809   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:54.315871   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:54.321315   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:28:54.331911   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:28:54.342943   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:28:54.347210   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:28:54.347254   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:28:54.352716   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
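Each of the three blocks above installs a PEM into /usr/share/ca-certificates and then creates the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL's lookup-by-hash expects (e.g. b5213941.0 for minikubeCA.pem). A small sketch of that convention, shelling out to openssl the same way the log does (installCALink is a hypothetical name):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCALink computes the OpenSSL subject hash of certPath and links
	// /etc/ssl/certs/<hash>.0 to it, mirroring the `openssl x509 -hash -noout`
	// plus `ln -fs` steps in the log above.
	func installCALink(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -f: replace an existing link
		return link, os.Symlink(certPath, link)
	}
	
	func main() {
		link, err := installCALink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println("created", link)
	}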
	I0831 22:28:54.363082   32390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:28:54.366986   32390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:28:54.367033   32390 kubeadm.go:934] updating node {m02 192.168.39.61 8443 v1.31.0 crio true true} ...
	I0831 22:28:54.367114   32390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:28:54.367144   32390 kube-vip.go:115] generating kube-vip config ...
	I0831 22:28:54.367184   32390 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:28:54.382329   32390 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:28:54.382415   32390 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0831 22:28:54.382474   32390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:28:54.394131   32390 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0831 22:28:54.394185   32390 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0831 22:28:54.405229   32390 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0831 22:28:54.405261   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0831 22:28:54.405293   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:28:54.405268   32390 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0831 22:28:54.405389   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:28:54.409887   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0831 22:28:54.409911   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0831 22:28:55.821700   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:28:55.836367   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:28:55.836466   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:28:55.841577   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0831 22:28:55.841617   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0831 22:28:58.430088   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:28:58.430192   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:28:58.434955   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0831 22:28:58.434987   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
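The download.go lines above fetch kubelet and kubeadm from dl.k8s.io with a `checksum=file:...sha256` URL, meaning each binary is verified against its published SHA-256 before being scp'd into /var/lib/minikube/binaries. A plain-Go sketch of that verify-after-download step (fetchVerified is a hypothetical helper, not minikube's download package):

	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)
	
	// fetchVerified downloads url to dest and checks its SHA-256 against the
	// hex digest published at url+".sha256", roughly what the checksum= query
	// in the log above asks the downloader to do.
	func fetchVerified(url, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
	
		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()
	
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
	
		sumResp, err := http.Get(url + ".sha256")
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		want, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
		}
		return nil
	}
	
	func main() {
		if err := fetchVerified("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl", "/tmp/kubectl"); err != nil {
			panic(err)
		}
	}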
	I0831 22:28:58.692031   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 22:28:58.701402   32390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:28:58.718672   32390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:28:58.734906   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:28:58.751100   32390 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:28:58.754853   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:58.766760   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:58.896996   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:28:58.914840   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:58.915281   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:58.915350   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:58.931278   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0831 22:28:58.931717   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:58.932301   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:58.932328   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:58.932622   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:58.932846   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:58.933002   32390 start.go:317] joinCluster: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:28:58.933127   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0831 22:28:58.933151   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:58.935853   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:58.936229   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:58.936260   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:58.936392   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:58.936571   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:58.936735   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:58.936856   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:59.097280   32390 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:28:59.097330   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vcedyv.uh9p93wlnbwgapwi --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m02 --control-plane --apiserver-advertise-address=192.168.39.61 --apiserver-bind-port=8443"
	I0831 22:29:19.350134   32390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vcedyv.uh9p93wlnbwgapwi --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m02 --control-plane --apiserver-advertise-address=192.168.39.61 --apiserver-bind-port=8443": (20.252769056s)
	I0831 22:29:19.350173   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0831 22:29:19.758953   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957517-m02 minikube.k8s.io/updated_at=2024_08_31T22_29_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=ha-957517 minikube.k8s.io/primary=false
	I0831 22:29:19.880353   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957517-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0831 22:29:20.003429   32390 start.go:319] duration metric: took 21.070424201s to joinCluster
	I0831 22:29:20.003509   32390 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:29:20.003771   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:29:20.004872   32390 out.go:177] * Verifying Kubernetes components...
	I0831 22:29:20.006079   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:29:20.341264   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:29:20.410688   32390 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:29:20.410894   32390 kapi.go:59] client config for ha-957517: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 22:29:20.410949   32390 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.137:8443
	I0831 22:29:20.411138   32390 node_ready.go:35] waiting up to 6m0s for node "ha-957517-m02" to be "Ready" ...
	I0831 22:29:20.411220   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:20.411227   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:20.411234   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:20.411239   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:20.423777   32390 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0831 22:29:20.911667   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:20.911693   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:20.911702   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:20.911708   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:20.921243   32390 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 22:29:21.412056   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:21.412073   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:21.412082   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:21.412085   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:21.416001   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:21.912280   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:21.912305   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:21.912316   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:21.912323   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:21.915704   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:22.411589   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:22.411608   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:22.411616   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:22.411622   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:22.414601   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:22.415221   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
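node_ready.go is polling GET /api/v1/nodes/ha-957517-m02 roughly every 500ms until the node reports the Ready condition as True, within the 6m0s budget noted above. For reference, the equivalent loop with client-go (waitNodeReady is a hypothetical helper, not minikube's node_ready.go):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitNodeReady polls the node object until its Ready condition is True
	// or the timeout expires, mirroring the GET loop in the log above.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready after %v", name, timeout)
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-13149/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "ha-957517-m02", 6*time.Minute); err != nil {
			panic(err)
		}
	}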
	I0831 22:29:22.911525   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:22.911546   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:22.911554   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:22.911559   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:22.915237   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:23.411475   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:23.411496   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:23.411504   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:23.411510   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:23.415282   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:23.912278   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:23.912302   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:23.912313   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:23.912321   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:23.915933   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:24.412277   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:24.412303   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:24.412315   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:24.412319   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:24.415947   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:24.416488   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:24.911942   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:24.911967   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:24.911978   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:24.911985   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:24.915879   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:25.412038   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:25.412068   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:25.412079   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:25.412085   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:25.415941   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:25.912318   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:25.912339   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:25.912347   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:25.912352   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:25.915698   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:26.411682   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:26.411703   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:26.411713   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:26.411720   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:26.415139   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:26.911444   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:26.911473   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:26.911483   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:26.911489   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:26.914977   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:26.915831   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:27.412252   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:27.412272   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:27.412280   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:27.412284   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:27.557980   32390 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I0831 22:29:27.912255   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:27.912282   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:27.912292   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:27.912296   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:27.915720   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:28.411502   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:28.411530   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:28.411542   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:28.411549   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:28.415301   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:28.912121   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:28.912150   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:28.912160   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:28.912166   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:28.915450   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:28.916479   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:29.412345   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:29.412367   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:29.412378   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:29.412384   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:29.416248   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:29.911417   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:29.911440   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:29.911448   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:29.911453   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:29.914597   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:30.411685   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:30.411706   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:30.411717   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:30.411721   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:30.414912   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:30.911979   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:30.912005   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:30.912015   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:30.912022   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:30.915304   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:31.412093   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:31.412121   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:31.412137   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:31.412142   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:31.415509   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:31.416030   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:31.911484   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:31.911513   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:31.911524   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:31.911529   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:31.915114   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:32.411662   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:32.411685   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:32.411693   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:32.411696   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:32.415131   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:32.912217   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:32.912237   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:32.912245   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:32.912251   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:32.915718   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:33.411633   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:33.411656   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:33.411667   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:33.411673   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:33.414773   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:33.911723   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:33.911741   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:33.911749   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:33.911753   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:33.914906   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:33.915623   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:34.411358   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:34.411379   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:34.411390   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:34.411394   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:34.415548   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:29:34.911534   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:34.911563   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:34.911573   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:34.911581   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:34.914824   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:35.412127   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:35.412158   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:35.412169   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:35.412175   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:35.415546   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:35.911393   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:35.911416   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:35.911426   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:35.911433   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:35.914627   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:36.411478   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:36.411499   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:36.411507   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:36.411511   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:36.414430   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:36.414833   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:36.912255   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:36.912278   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:36.912287   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:36.912292   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:36.915979   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:37.411305   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:37.411341   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:37.411354   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:37.411361   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:37.415106   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:37.912299   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:37.912324   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:37.912333   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:37.912338   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:37.916027   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:38.412221   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:38.412260   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:38.412272   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:38.412278   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:38.417429   32390 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 22:29:38.417935   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:38.912150   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:38.912177   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:38.912188   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:38.912195   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:38.915847   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:39.411431   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:39.411456   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:39.411468   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:39.411476   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:39.414700   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:39.911707   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:39.911727   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:39.911735   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:39.911738   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:39.915458   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.411303   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:40.411336   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.411347   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.411352   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.414737   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.415173   32390 node_ready.go:49] node "ha-957517-m02" has status "Ready":"True"
	I0831 22:29:40.415189   32390 node_ready.go:38] duration metric: took 20.004037422s for node "ha-957517-m02" to be "Ready" ...
	I0831 22:29:40.415197   32390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:29:40.415262   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:40.415271   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.415276   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.415282   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.419154   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.425091   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.425161   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-k7rsc
	I0831 22:29:40.425172   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.425179   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.425184   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.428475   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.429393   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.429406   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.429412   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.429418   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.431707   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.432466   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.432482   32390 pod_ready.go:82] duration metric: took 7.368991ms for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.432490   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.432542   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-pc7gn
	I0831 22:29:40.432551   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.432557   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.432562   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.434825   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.435308   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.435321   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.435347   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.435351   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.437687   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.438190   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.438212   32390 pod_ready.go:82] duration metric: took 5.714169ms for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.438223   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.438280   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517
	I0831 22:29:40.438291   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.438300   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.438309   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.440974   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.441951   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.441965   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.441972   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.441975   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.444856   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.445439   32390 pod_ready.go:93] pod "etcd-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.445460   32390 pod_ready.go:82] duration metric: took 7.229121ms for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.445473   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.445536   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m02
	I0831 22:29:40.445546   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.445555   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.445564   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.447802   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.448512   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:40.448529   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.448539   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.448544   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.450706   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.451250   32390 pod_ready.go:93] pod "etcd-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.451269   32390 pod_ready.go:82] duration metric: took 5.788447ms for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.451288   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.611665   32390 request.go:632] Waited for 160.321918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:29:40.611739   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:29:40.611748   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.611764   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.611768   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.615193   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.812257   32390 request.go:632] Waited for 196.336667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.812324   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.812330   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.812337   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.812341   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.816056   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.816752   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.816783   32390 pod_ready.go:82] duration metric: took 365.483332ms for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.816797   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.012258   32390 request.go:632] Waited for 195.392394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:29:41.012309   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:29:41.012327   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.012339   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.012345   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.015816   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.212008   32390 request.go:632] Waited for 195.320702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:41.212064   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:41.212069   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.212076   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.212081   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.215234   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.215969   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:41.215984   32390 pod_ready.go:82] duration metric: took 399.177722ms for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.215993   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.412160   32390 request.go:632] Waited for 196.097047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:29:41.412222   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:29:41.412228   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.412235   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.412239   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.415704   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.611676   32390 request.go:632] Waited for 195.374996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:41.611726   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:41.611731   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.611738   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.611742   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.615175   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.615766   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:41.615783   32390 pod_ready.go:82] duration metric: took 399.784074ms for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.615793   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.811944   32390 request.go:632] Waited for 196.095531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:29:41.812016   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:29:41.812023   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.812033   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.812039   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.815339   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.011433   32390 request.go:632] Waited for 195.308047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.011488   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.011493   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.011501   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.011504   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.015258   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.015820   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:42.015837   32390 pod_ready.go:82] duration metric: took 400.038293ms for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.015847   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.211981   32390 request.go:632] Waited for 196.066436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:29:42.212063   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:29:42.212068   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.212078   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.212084   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.215281   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.411392   32390 request.go:632] Waited for 195.419289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.411447   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.411454   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.411461   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.411465   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.414825   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.415301   32390 pod_ready.go:93] pod "kube-proxy-dvpbk" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:42.415348   32390 pod_ready.go:82] duration metric: took 399.466629ms for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.415364   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.611375   32390 request.go:632] Waited for 195.917329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:29:42.611433   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:29:42.611441   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.611449   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.611455   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.614965   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.812169   32390 request.go:632] Waited for 196.361735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:42.812234   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:42.812241   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.812251   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.812256   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.814686   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:42.815209   32390 pod_ready.go:93] pod "kube-proxy-xrp64" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:42.815225   32390 pod_ready.go:82] duration metric: took 399.854298ms for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.815234   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.012334   32390 request.go:632] Waited for 197.047061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:29:43.012411   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:29:43.012419   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.012429   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.012439   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.015831   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.211770   32390 request.go:632] Waited for 195.377614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:43.211833   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:43.211841   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.211853   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.211858   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.215022   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.215784   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:43.215804   32390 pod_ready.go:82] duration metric: took 400.564003ms for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.215822   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.412010   32390 request.go:632] Waited for 196.11497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:29:43.412066   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:29:43.412071   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.412078   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.412083   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.415261   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.611791   32390 request.go:632] Waited for 195.874911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:43.611872   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:43.611879   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.611892   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.611902   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.615561   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.616048   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:43.616066   32390 pod_ready.go:82] duration metric: took 400.236887ms for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.616077   32390 pod_ready.go:39] duration metric: took 3.200871491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:29:43.616094   32390 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:29:43.616140   32390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:29:43.632526   32390 api_server.go:72] duration metric: took 23.628979508s to wait for apiserver process to appear ...
	I0831 22:29:43.632555   32390 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:29:43.632576   32390 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0831 22:29:43.637074   32390 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0831 22:29:43.637137   32390 round_trippers.go:463] GET https://192.168.39.137:8443/version
	I0831 22:29:43.637153   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.637160   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.637170   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.638004   32390 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0831 22:29:43.638106   32390 api_server.go:141] control plane version: v1.31.0
	I0831 22:29:43.638124   32390 api_server.go:131] duration metric: took 5.56316ms to wait for apiserver health ...
	I0831 22:29:43.638134   32390 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:29:43.811504   32390 request.go:632] Waited for 173.287765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:43.811593   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:43.811601   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.811612   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.811620   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.817744   32390 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 22:29:43.822317   32390 system_pods.go:59] 17 kube-system pods found
	I0831 22:29:43.822348   32390 system_pods.go:61] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:29:43.822353   32390 system_pods.go:61] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:29:43.822358   32390 system_pods.go:61] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:29:43.822364   32390 system_pods.go:61] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:29:43.822373   32390 system_pods.go:61] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:29:43.822378   32390 system_pods.go:61] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:29:43.822383   32390 system_pods.go:61] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:29:43.822390   32390 system_pods.go:61] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:29:43.822396   32390 system_pods.go:61] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:29:43.822400   32390 system_pods.go:61] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:29:43.822404   32390 system_pods.go:61] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:29:43.822410   32390 system_pods.go:61] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:29:43.822414   32390 system_pods.go:61] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:29:43.822418   32390 system_pods.go:61] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:29:43.822421   32390 system_pods.go:61] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:29:43.822424   32390 system_pods.go:61] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:29:43.822427   32390 system_pods.go:61] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:29:43.822436   32390 system_pods.go:74] duration metric: took 184.288863ms to wait for pod list to return data ...
	I0831 22:29:43.822445   32390 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:29:44.011541   32390 request.go:632] Waited for 189.016326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:29:44.011613   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:29:44.011619   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:44.011626   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:44.011630   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:44.015633   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:44.015890   32390 default_sa.go:45] found service account: "default"
	I0831 22:29:44.015913   32390 default_sa.go:55] duration metric: took 193.460938ms for default service account to be created ...
	I0831 22:29:44.015922   32390 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:29:44.211279   32390 request.go:632] Waited for 195.286649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:44.211381   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:44.211388   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:44.211395   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:44.211402   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:44.216223   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:29:44.220704   32390 system_pods.go:86] 17 kube-system pods found
	I0831 22:29:44.220726   32390 system_pods.go:89] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:29:44.220732   32390 system_pods.go:89] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:29:44.220736   32390 system_pods.go:89] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:29:44.220740   32390 system_pods.go:89] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:29:44.220744   32390 system_pods.go:89] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:29:44.220750   32390 system_pods.go:89] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:29:44.220755   32390 system_pods.go:89] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:29:44.220760   32390 system_pods.go:89] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:29:44.220766   32390 system_pods.go:89] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:29:44.220774   32390 system_pods.go:89] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:29:44.220780   32390 system_pods.go:89] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:29:44.220788   32390 system_pods.go:89] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:29:44.220794   32390 system_pods.go:89] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:29:44.220799   32390 system_pods.go:89] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:29:44.220805   32390 system_pods.go:89] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:29:44.220808   32390 system_pods.go:89] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:29:44.220814   32390 system_pods.go:89] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:29:44.220821   32390 system_pods.go:126] duration metric: took 204.892952ms to wait for k8s-apps to be running ...
	I0831 22:29:44.220830   32390 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:29:44.220880   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:29:44.236893   32390 system_svc.go:56] duration metric: took 16.05511ms WaitForService to wait for kubelet
	I0831 22:29:44.236916   32390 kubeadm.go:582] duration metric: took 24.233376408s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:29:44.236935   32390 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:29:44.412338   32390 request.go:632] Waited for 175.326713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes
	I0831 22:29:44.412418   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes
	I0831 22:29:44.412429   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:44.412437   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:44.412442   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:44.415996   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:44.416895   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:29:44.416923   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:29:44.416947   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:29:44.416955   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:29:44.416961   32390 node_conditions.go:105] duration metric: took 180.022322ms to run NodePressure ...
	I0831 22:29:44.416977   32390 start.go:241] waiting for startup goroutines ...
	I0831 22:29:44.417005   32390 start.go:255] writing updated cluster config ...
	I0831 22:29:44.419190   32390 out.go:201] 
	I0831 22:29:44.420858   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:29:44.420943   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:29:44.422660   32390 out.go:177] * Starting "ha-957517-m03" control-plane node in "ha-957517" cluster
	I0831 22:29:44.423897   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:29:44.423921   32390 cache.go:56] Caching tarball of preloaded images
	I0831 22:29:44.424026   32390 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:29:44.424037   32390 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:29:44.424145   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:29:44.424311   32390 start.go:360] acquireMachinesLock for ha-957517-m03: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:29:44.424354   32390 start.go:364] duration metric: took 24.425µs to acquireMachinesLock for "ha-957517-m03"
	I0831 22:29:44.424367   32390 start.go:93] Provisioning new machine with config: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:29:44.424457   32390 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0831 22:29:44.426128   32390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 22:29:44.426221   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:29:44.426255   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:29:44.440856   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0831 22:29:44.441305   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:29:44.441754   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:29:44.441776   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:29:44.442024   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:29:44.442213   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:29:44.442358   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:29:44.442524   32390 start.go:159] libmachine.API.Create for "ha-957517" (driver="kvm2")
	I0831 22:29:44.442552   32390 client.go:168] LocalClient.Create starting
	I0831 22:29:44.442584   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:29:44.442620   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:29:44.442644   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:29:44.442708   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:29:44.442737   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:29:44.442754   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:29:44.442779   32390 main.go:141] libmachine: Running pre-create checks...
	I0831 22:29:44.442791   32390 main.go:141] libmachine: (ha-957517-m03) Calling .PreCreateCheck
	I0831 22:29:44.442939   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetConfigRaw
	I0831 22:29:44.443271   32390 main.go:141] libmachine: Creating machine...
	I0831 22:29:44.443285   32390 main.go:141] libmachine: (ha-957517-m03) Calling .Create
	I0831 22:29:44.443409   32390 main.go:141] libmachine: (ha-957517-m03) Creating KVM machine...
	I0831 22:29:44.444581   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found existing default KVM network
	I0831 22:29:44.444707   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found existing private KVM network mk-ha-957517
	I0831 22:29:44.444803   32390 main.go:141] libmachine: (ha-957517-m03) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03 ...
	I0831 22:29:44.444830   32390 main.go:141] libmachine: (ha-957517-m03) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:29:44.444890   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.444811   33157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:29:44.444984   32390 main.go:141] libmachine: (ha-957517-m03) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:29:44.667359   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.667216   33157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa...
	I0831 22:29:44.783983   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.783875   33157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/ha-957517-m03.rawdisk...
	I0831 22:29:44.784016   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Writing magic tar header
	I0831 22:29:44.784034   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Writing SSH key tar header
	I0831 22:29:44.784046   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.783987   33157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03 ...
	I0831 22:29:44.784107   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03
	I0831 22:29:44.784135   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:29:44.784156   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03 (perms=drwx------)
	I0831 22:29:44.784170   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:29:44.784187   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:29:44.784200   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:29:44.784215   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:29:44.784232   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:29:44.784245   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:29:44.784265   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:29:44.784279   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:29:44.784295   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home
	I0831 22:29:44.784307   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:29:44.784323   32390 main.go:141] libmachine: (ha-957517-m03) Creating domain...
	I0831 22:29:44.784339   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Skipping /home - not owner
	I0831 22:29:44.785218   32390 main.go:141] libmachine: (ha-957517-m03) define libvirt domain using xml: 
	I0831 22:29:44.785241   32390 main.go:141] libmachine: (ha-957517-m03) <domain type='kvm'>
	I0831 22:29:44.785249   32390 main.go:141] libmachine: (ha-957517-m03)   <name>ha-957517-m03</name>
	I0831 22:29:44.785257   32390 main.go:141] libmachine: (ha-957517-m03)   <memory unit='MiB'>2200</memory>
	I0831 22:29:44.785264   32390 main.go:141] libmachine: (ha-957517-m03)   <vcpu>2</vcpu>
	I0831 22:29:44.785268   32390 main.go:141] libmachine: (ha-957517-m03)   <features>
	I0831 22:29:44.785273   32390 main.go:141] libmachine: (ha-957517-m03)     <acpi/>
	I0831 22:29:44.785281   32390 main.go:141] libmachine: (ha-957517-m03)     <apic/>
	I0831 22:29:44.785292   32390 main.go:141] libmachine: (ha-957517-m03)     <pae/>
	I0831 22:29:44.785302   32390 main.go:141] libmachine: (ha-957517-m03)     
	I0831 22:29:44.785313   32390 main.go:141] libmachine: (ha-957517-m03)   </features>
	I0831 22:29:44.785323   32390 main.go:141] libmachine: (ha-957517-m03)   <cpu mode='host-passthrough'>
	I0831 22:29:44.785341   32390 main.go:141] libmachine: (ha-957517-m03)   
	I0831 22:29:44.785354   32390 main.go:141] libmachine: (ha-957517-m03)   </cpu>
	I0831 22:29:44.785364   32390 main.go:141] libmachine: (ha-957517-m03)   <os>
	I0831 22:29:44.785375   32390 main.go:141] libmachine: (ha-957517-m03)     <type>hvm</type>
	I0831 22:29:44.785388   32390 main.go:141] libmachine: (ha-957517-m03)     <boot dev='cdrom'/>
	I0831 22:29:44.785398   32390 main.go:141] libmachine: (ha-957517-m03)     <boot dev='hd'/>
	I0831 22:29:44.785410   32390 main.go:141] libmachine: (ha-957517-m03)     <bootmenu enable='no'/>
	I0831 22:29:44.785420   32390 main.go:141] libmachine: (ha-957517-m03)   </os>
	I0831 22:29:44.785448   32390 main.go:141] libmachine: (ha-957517-m03)   <devices>
	I0831 22:29:44.785468   32390 main.go:141] libmachine: (ha-957517-m03)     <disk type='file' device='cdrom'>
	I0831 22:29:44.785478   32390 main.go:141] libmachine: (ha-957517-m03)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/boot2docker.iso'/>
	I0831 22:29:44.785488   32390 main.go:141] libmachine: (ha-957517-m03)       <target dev='hdc' bus='scsi'/>
	I0831 22:29:44.785504   32390 main.go:141] libmachine: (ha-957517-m03)       <readonly/>
	I0831 22:29:44.785520   32390 main.go:141] libmachine: (ha-957517-m03)     </disk>
	I0831 22:29:44.785536   32390 main.go:141] libmachine: (ha-957517-m03)     <disk type='file' device='disk'>
	I0831 22:29:44.785555   32390 main.go:141] libmachine: (ha-957517-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:29:44.785572   32390 main.go:141] libmachine: (ha-957517-m03)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/ha-957517-m03.rawdisk'/>
	I0831 22:29:44.785584   32390 main.go:141] libmachine: (ha-957517-m03)       <target dev='hda' bus='virtio'/>
	I0831 22:29:44.785594   32390 main.go:141] libmachine: (ha-957517-m03)     </disk>
	I0831 22:29:44.785605   32390 main.go:141] libmachine: (ha-957517-m03)     <interface type='network'>
	I0831 22:29:44.785618   32390 main.go:141] libmachine: (ha-957517-m03)       <source network='mk-ha-957517'/>
	I0831 22:29:44.785627   32390 main.go:141] libmachine: (ha-957517-m03)       <model type='virtio'/>
	I0831 22:29:44.785640   32390 main.go:141] libmachine: (ha-957517-m03)     </interface>
	I0831 22:29:44.785655   32390 main.go:141] libmachine: (ha-957517-m03)     <interface type='network'>
	I0831 22:29:44.785668   32390 main.go:141] libmachine: (ha-957517-m03)       <source network='default'/>
	I0831 22:29:44.785676   32390 main.go:141] libmachine: (ha-957517-m03)       <model type='virtio'/>
	I0831 22:29:44.785687   32390 main.go:141] libmachine: (ha-957517-m03)     </interface>
	I0831 22:29:44.785696   32390 main.go:141] libmachine: (ha-957517-m03)     <serial type='pty'>
	I0831 22:29:44.785703   32390 main.go:141] libmachine: (ha-957517-m03)       <target port='0'/>
	I0831 22:29:44.785712   32390 main.go:141] libmachine: (ha-957517-m03)     </serial>
	I0831 22:29:44.785723   32390 main.go:141] libmachine: (ha-957517-m03)     <console type='pty'>
	I0831 22:29:44.785738   32390 main.go:141] libmachine: (ha-957517-m03)       <target type='serial' port='0'/>
	I0831 22:29:44.785749   32390 main.go:141] libmachine: (ha-957517-m03)     </console>
	I0831 22:29:44.785759   32390 main.go:141] libmachine: (ha-957517-m03)     <rng model='virtio'>
	I0831 22:29:44.785772   32390 main.go:141] libmachine: (ha-957517-m03)       <backend model='random'>/dev/random</backend>
	I0831 22:29:44.785781   32390 main.go:141] libmachine: (ha-957517-m03)     </rng>
	I0831 22:29:44.785786   32390 main.go:141] libmachine: (ha-957517-m03)     
	I0831 22:29:44.785794   32390 main.go:141] libmachine: (ha-957517-m03)     
	I0831 22:29:44.785803   32390 main.go:141] libmachine: (ha-957517-m03)   </devices>
	I0831 22:29:44.785812   32390 main.go:141] libmachine: (ha-957517-m03) </domain>
	I0831 22:29:44.785826   32390 main.go:141] libmachine: (ha-957517-m03) 
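The block above is the libvirt domain definition the KVM driver assembles for the third control-plane node: 2200 MiB of RAM, 2 vCPUs, host-passthrough CPU, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and one virtio NIC on each of the mk-ha-957517 and default networks. As a hedged aside (not part of the test run), the same definition can be inspected on the Jenkins host with standard virsh commands once the domain exists:

	virsh dumpxml ha-957517-m03      # print the full domain XML defined above
	virsh domiflist ha-957517-m03    # list both virtio interfaces and the networks they attach to
	virsh net-list --all             # confirm mk-ha-957517 and default are active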
	I0831 22:29:44.792239   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:ee:c9:7b in network default
	I0831 22:29:44.792796   32390 main.go:141] libmachine: (ha-957517-m03) Ensuring networks are active...
	I0831 22:29:44.792815   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:44.793478   32390 main.go:141] libmachine: (ha-957517-m03) Ensuring network default is active
	I0831 22:29:44.793753   32390 main.go:141] libmachine: (ha-957517-m03) Ensuring network mk-ha-957517 is active
	I0831 22:29:44.794247   32390 main.go:141] libmachine: (ha-957517-m03) Getting domain xml...
	I0831 22:29:44.794923   32390 main.go:141] libmachine: (ha-957517-m03) Creating domain...
	I0831 22:29:46.018660   32390 main.go:141] libmachine: (ha-957517-m03) Waiting to get IP...
	I0831 22:29:46.019544   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.019918   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.019975   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.019928   33157 retry.go:31] will retry after 188.471058ms: waiting for machine to come up
	I0831 22:29:46.210289   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.210735   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.210757   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.210710   33157 retry.go:31] will retry after 266.957858ms: waiting for machine to come up
	I0831 22:29:46.479104   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.479524   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.479551   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.479483   33157 retry.go:31] will retry after 455.33176ms: waiting for machine to come up
	I0831 22:29:46.936036   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.936572   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.936599   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.936526   33157 retry.go:31] will retry after 567.079035ms: waiting for machine to come up
	I0831 22:29:47.505211   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:47.505670   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:47.505696   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:47.505633   33157 retry.go:31] will retry after 565.404588ms: waiting for machine to come up
	I0831 22:29:48.072964   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:48.073879   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:48.073907   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:48.073829   33157 retry.go:31] will retry after 901.14711ms: waiting for machine to come up
	I0831 22:29:48.976876   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:48.977333   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:48.977354   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:48.977294   33157 retry.go:31] will retry after 952.500278ms: waiting for machine to come up
	I0831 22:29:49.931405   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:49.931882   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:49.931909   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:49.931847   33157 retry.go:31] will retry after 896.313086ms: waiting for machine to come up
	I0831 22:29:50.829903   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:50.830367   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:50.830392   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:50.830340   33157 retry.go:31] will retry after 1.726862486s: waiting for machine to come up
	I0831 22:29:52.559146   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:52.559587   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:52.559617   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:52.559539   33157 retry.go:31] will retry after 1.792217096s: waiting for machine to come up
	I0831 22:29:54.353025   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:54.353502   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:54.353532   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:54.353446   33157 retry.go:31] will retry after 2.567340298s: waiting for machine to come up
	I0831 22:29:56.922225   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:56.922595   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:56.922629   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:56.922585   33157 retry.go:31] will retry after 3.025143911s: waiting for machine to come up
	I0831 22:29:59.949599   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:59.950025   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:59.950058   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:59.949976   33157 retry.go:31] will retry after 3.145761762s: waiting for machine to come up
	I0831 22:30:03.098803   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:03.099192   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:30:03.099220   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:30:03.099151   33157 retry.go:31] will retry after 5.518514687s: waiting for machine to come up
	I0831 22:30:08.622195   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.622695   32390 main.go:141] libmachine: (ha-957517-m03) Found IP for machine: 192.168.39.26
	I0831 22:30:08.622717   32390 main.go:141] libmachine: (ha-957517-m03) Reserving static IP address...
	I0831 22:30:08.622730   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has current primary IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.623147   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find host DHCP lease matching {name: "ha-957517-m03", mac: "52:54:00:5e:d5:49", ip: "192.168.39.26"} in network mk-ha-957517
	I0831 22:30:08.697760   32390 main.go:141] libmachine: (ha-957517-m03) Reserved static IP address: 192.168.39.26
	I0831 22:30:08.697781   32390 main.go:141] libmachine: (ha-957517-m03) Waiting for SSH to be available...
	I0831 22:30:08.697790   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Getting to WaitForSSH function...
	I0831 22:30:08.700520   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.700975   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:08.701007   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.701091   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Using SSH client type: external
	I0831 22:30:08.701120   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa (-rw-------)
	I0831 22:30:08.701167   32390 main.go:141] libmachine: (ha-957517-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:30:08.701188   32390 main.go:141] libmachine: (ha-957517-m03) DBG | About to run SSH command:
	I0831 22:30:08.701210   32390 main.go:141] libmachine: (ha-957517-m03) DBG | exit 0
	I0831 22:30:08.823670   32390 main.go:141] libmachine: (ha-957517-m03) DBG | SSH cmd err, output: <nil>: 
	I0831 22:30:08.823927   32390 main.go:141] libmachine: (ha-957517-m03) KVM machine creation complete!
	I0831 22:30:08.824318   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetConfigRaw
	I0831 22:30:08.824831   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:08.825067   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:08.825241   32390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:30:08.825252   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:30:08.826809   32390 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:30:08.826826   32390 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:30:08.826834   32390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:30:08.826843   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:08.829136   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.829600   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:08.829626   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.829803   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:08.829963   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.830121   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.830308   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:08.830495   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:08.830754   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:08.830768   32390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:30:08.930973   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:30:08.930995   32390 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:30:08.931004   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:08.933860   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.934206   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:08.934234   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.934438   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:08.934624   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.934796   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.934921   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:08.935078   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:08.935240   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:08.935251   32390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:30:09.032484   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:30:09.032577   32390 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:30:09.032594   32390 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:30:09.032603   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:30:09.032881   32390 buildroot.go:166] provisioning hostname "ha-957517-m03"
	I0831 22:30:09.032911   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:30:09.033090   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.035689   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.036112   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.036144   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.036296   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.036448   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.036561   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.036658   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.036844   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.037050   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.037067   32390 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517-m03 && echo "ha-957517-m03" | sudo tee /etc/hostname
	I0831 22:30:09.151226   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517-m03
	
	I0831 22:30:09.151259   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.154054   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.154443   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.154473   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.154629   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.154830   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.154991   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.155117   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.155284   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.155488   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.155504   32390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:30:09.265290   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:30:09.265326   32390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:30:09.265347   32390 buildroot.go:174] setting up certificates
	I0831 22:30:09.265357   32390 provision.go:84] configureAuth start
	I0831 22:30:09.265369   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:30:09.265655   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:09.268441   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.268855   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.268890   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.269082   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.271175   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.271490   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.271520   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.271641   32390 provision.go:143] copyHostCerts
	I0831 22:30:09.271677   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:30:09.271720   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:30:09.271737   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:30:09.271809   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:30:09.271888   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:30:09.271907   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:30:09.271914   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:30:09.271940   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:30:09.271985   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:30:09.272001   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:30:09.272007   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:30:09.272028   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:30:09.272079   32390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517-m03 san=[127.0.0.1 192.168.39.26 ha-957517-m03 localhost minikube]
	I0831 22:30:09.432938   32390 provision.go:177] copyRemoteCerts
	I0831 22:30:09.432994   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:30:09.433016   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.435571   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.435859   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.435890   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.436043   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.436226   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.436365   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.436497   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:09.518347   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:30:09.518435   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:30:09.544191   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:30:09.544280   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:30:09.569902   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:30:09.569978   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:30:09.595340   32390 provision.go:87] duration metric: took 329.950411ms to configureAuth
	I0831 22:30:09.595372   32390 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:30:09.595578   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:30:09.595647   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.598396   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.598877   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.598908   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.599078   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.599276   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.599484   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.599656   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.599788   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.599975   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.599990   32390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:30:09.819547   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
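The SSH command above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarts CRI-O, so the in-cluster service CIDR is treated as an insecure registry. A hedged sketch (not part of the test run) for confirming the drop-in on the guest; whether the flag actually reaches the crio process depends on the service unit reading that sysconfig file, which is assumed here:

	cat /etc/sysconfig/crio.minikube                         # should echo the CRIO_MINIKUBE_OPTIONS line above
	systemctl is-active crio                                 # CRI-O should be running again after the restart
	ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*'   # assumes the unit passes $CRIO_MINIKUBE_OPTIONS on the command line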
	I0831 22:30:09.819575   32390 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:30:09.819585   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetURL
	I0831 22:30:09.820815   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Using libvirt version 6000000
	I0831 22:30:09.823079   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.823462   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.823491   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.823657   32390 main.go:141] libmachine: Docker is up and running!
	I0831 22:30:09.823674   32390 main.go:141] libmachine: Reticulating splines...
	I0831 22:30:09.823683   32390 client.go:171] duration metric: took 25.381122795s to LocalClient.Create
	I0831 22:30:09.823710   32390 start.go:167] duration metric: took 25.381187201s to libmachine.API.Create "ha-957517"
	I0831 22:30:09.823721   32390 start.go:293] postStartSetup for "ha-957517-m03" (driver="kvm2")
	I0831 22:30:09.823736   32390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:30:09.823758   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:09.824025   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:30:09.824052   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.826223   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.826556   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.826583   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.826720   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.826885   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.827040   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.827168   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:09.906472   32390 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:30:09.911007   32390 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:30:09.911034   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:30:09.911104   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:30:09.911213   32390 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:30:09.911225   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:30:09.911357   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:30:09.921606   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:30:09.950196   32390 start.go:296] duration metric: took 126.462079ms for postStartSetup
	I0831 22:30:09.950242   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetConfigRaw
	I0831 22:30:09.950835   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:09.953781   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.954146   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.954183   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.954461   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:30:09.954649   32390 start.go:128] duration metric: took 25.530183034s to createHost
	I0831 22:30:09.954673   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.956919   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.957196   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.957222   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.957359   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.957506   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.957628   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.957773   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.957908   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.958077   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.958086   32390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:30:10.056681   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143410.033508490
	
	I0831 22:30:10.056705   32390 fix.go:216] guest clock: 1725143410.033508490
	I0831 22:30:10.056717   32390 fix.go:229] Guest: 2024-08-31 22:30:10.03350849 +0000 UTC Remote: 2024-08-31 22:30:09.954660074 +0000 UTC m=+149.043426289 (delta=78.848416ms)
	I0831 22:30:10.056736   32390 fix.go:200] guest clock delta is within tolerance: 78.848416ms
	I0831 22:30:10.056743   32390 start.go:83] releasing machines lock for "ha-957517-m03", held for 25.63238216s
	I0831 22:30:10.056761   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.057037   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:10.059647   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.060036   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:10.060066   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.062714   32390 out.go:177] * Found network options:
	I0831 22:30:10.064732   32390 out.go:177]   - NO_PROXY=192.168.39.137,192.168.39.61
	W0831 22:30:10.066213   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 22:30:10.066241   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:30:10.066258   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.066963   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.067195   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.067314   32390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:30:10.067371   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	W0831 22:30:10.067489   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 22:30:10.067517   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:30:10.067586   32390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:30:10.067616   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:10.070260   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070451   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070620   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:10.070669   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070830   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:10.070851   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070860   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:10.071059   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:10.071093   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:10.071250   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:10.071266   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:10.071434   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:10.071438   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:10.071591   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:10.304386   32390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:30:10.310730   32390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:30:10.310802   32390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:30:10.329120   32390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:30:10.329151   32390 start.go:495] detecting cgroup driver to use...
	I0831 22:30:10.329223   32390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:30:10.346114   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:30:10.361295   32390 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:30:10.361360   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:30:10.375585   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:30:10.389748   32390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:30:10.508832   32390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:30:10.654279   32390 docker.go:233] disabling docker service ...
	I0831 22:30:10.654357   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:30:10.670019   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:30:10.684777   32390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:30:10.819832   32390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:30:10.949249   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:30:10.964959   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:30:10.983961   32390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:30:10.984026   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:10.995937   32390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:30:10.996003   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.009572   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.021077   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.032655   32390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:30:11.044442   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.056421   32390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.075569   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.087138   32390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:30:11.098703   32390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:30:11.098768   32390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:30:11.114721   32390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:30:11.127062   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:30:11.246987   32390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:30:11.340825   32390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:30:11.340901   32390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:30:11.346280   32390 start.go:563] Will wait 60s for crictl version
	I0831 22:30:11.346353   32390 ssh_runner.go:195] Run: which crictl
	I0831 22:30:11.350335   32390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:30:11.390222   32390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
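The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl), loads br_netfilter after the netfilter sysctl probe fails, enables IP forwarding, and restarts CRI-O, after which crictl reports cri-o 1.29.1. A hedged sketch (not part of the test run) for spot-checking the result on the guest:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	lsmod | grep br_netfilter            # module loaded because /proc/sys/net/bridge was missing before
	cat /proc/sys/net/ipv4/ip_forward    # should read 1 after the echo above
	sudo crictl version                  # same probe the log runs: RuntimeName cri-o, RuntimeVersion 1.29.1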
	I0831 22:30:11.390311   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:30:11.420458   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:30:11.451574   32390 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:30:11.452841   32390 out.go:177]   - env NO_PROXY=192.168.39.137
	I0831 22:30:11.454238   32390 out.go:177]   - env NO_PROXY=192.168.39.137,192.168.39.61
	I0831 22:30:11.455403   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:11.458308   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:11.458781   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:11.458818   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:11.459100   32390 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:30:11.463728   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:30:11.476818   32390 mustload.go:65] Loading cluster: ha-957517
	I0831 22:30:11.477069   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:30:11.477327   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:30:11.477375   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:30:11.492867   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0831 22:30:11.493293   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:30:11.493736   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:30:11.493754   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:30:11.494048   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:30:11.494252   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:30:11.495794   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:30:11.496076   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:30:11.496122   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:30:11.511012   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0831 22:30:11.511448   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:30:11.511933   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:30:11.511956   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:30:11.512264   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:30:11.512460   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:30:11.512631   32390 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.26
	I0831 22:30:11.512643   32390 certs.go:194] generating shared ca certs ...
	I0831 22:30:11.512657   32390 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:30:11.512787   32390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:30:11.512832   32390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:30:11.512841   32390 certs.go:256] generating profile certs ...
	I0831 22:30:11.512908   32390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:30:11.512934   32390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f
	I0831 22:30:11.512947   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.26 192.168.39.254]
	I0831 22:30:11.617566   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f ...
	I0831 22:30:11.617595   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f: {Name:mkc83f4cd90b98fa20d6a00874dcc873c13e5ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:30:11.617782   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f ...
	I0831 22:30:11.617796   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f: {Name:mkfc266e41c2031a162953cdbdca61197e3b8aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:30:11.617904   32390 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:30:11.618042   32390 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:30:11.618209   32390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:30:11.618226   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:30:11.618243   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:30:11.618257   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:30:11.618269   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:30:11.618281   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:30:11.618294   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:30:11.618305   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:30:11.618317   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:30:11.618366   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:30:11.618393   32390 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:30:11.618401   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:30:11.618422   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:30:11.618442   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:30:11.618466   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:30:11.618503   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:30:11.618528   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:11.618541   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:30:11.618553   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:30:11.618581   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:30:11.621676   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:11.622055   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:30:11.622079   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:11.622239   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:30:11.622470   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:30:11.622625   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:30:11.622772   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:30:11.699703   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 22:30:11.706252   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 22:30:11.720239   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 22:30:11.724731   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 22:30:11.736091   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 22:30:11.740441   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 22:30:11.750982   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 22:30:11.756133   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 22:30:11.768201   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 22:30:11.772564   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 22:30:11.783921   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 22:30:11.787891   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 22:30:11.799246   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:30:11.826642   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:30:11.855464   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:30:11.884492   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:30:11.912993   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0831 22:30:11.939431   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:30:11.964317   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:30:11.989006   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:30:12.013606   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:30:12.040296   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:30:12.064249   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:30:12.089686   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 22:30:12.108965   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 22:30:12.127712   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 22:30:12.148320   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 22:30:12.168568   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 22:30:12.187086   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 22:30:12.204466   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 22:30:12.222617   32390 ssh_runner.go:195] Run: openssl version
	I0831 22:30:12.228737   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:30:12.240426   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:30:12.245453   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:30:12.245503   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:30:12.251237   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:30:12.262117   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:30:12.272708   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:30:12.277124   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:30:12.277185   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:30:12.282772   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:30:12.293503   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:30:12.304508   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:12.309153   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:12.309206   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:12.322442   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:30:12.335035   32390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:30:12.339018   32390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:30:12.339065   32390 kubeadm.go:934] updating node {m03 192.168.39.26 8443 v1.31.0 crio true true} ...
	I0831 22:30:12.339136   32390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:30:12.339164   32390 kube-vip.go:115] generating kube-vip config ...
	I0831 22:30:12.339197   32390 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:30:12.357293   32390 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:30:12.357358   32390 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0831 22:30:12.357417   32390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:30:12.366929   32390 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0831 22:30:12.366976   32390 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0831 22:30:12.376334   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0831 22:30:12.376338   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0831 22:30:12.376356   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0831 22:30:12.376380   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:30:12.376387   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:30:12.376359   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:30:12.376459   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:30:12.376465   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:30:12.381168   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0831 22:30:12.381189   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0831 22:30:12.404499   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0831 22:30:12.404545   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0831 22:30:12.404589   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:30:12.404694   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:30:12.450536   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0831 22:30:12.450586   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0831 22:30:13.242909   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 22:30:13.253222   32390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:30:13.272112   32390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:30:13.289461   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:30:13.306177   32390 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:30:13.310622   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:30:13.323288   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:30:13.460174   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:30:13.478358   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:30:13.478684   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:30:13.478733   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:30:13.494270   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0831 22:30:13.494721   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:30:13.495175   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:30:13.495195   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:30:13.495546   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:30:13.495736   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:30:13.495915   32390 start.go:317] joinCluster: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:30:13.496070   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0831 22:30:13.496090   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:30:13.498768   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:13.499166   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:30:13.499194   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:13.499319   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:30:13.499515   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:30:13.499673   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:30:13.499806   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:30:13.651030   32390 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:30:13.651084   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ihyohl.5nvwjgxowwz1ejsy --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m03 --control-plane --apiserver-advertise-address=192.168.39.26 --apiserver-bind-port=8443"
	I0831 22:30:36.021355   32390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ihyohl.5nvwjgxowwz1ejsy --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m03 --control-plane --apiserver-advertise-address=192.168.39.26 --apiserver-bind-port=8443": (22.370247548s)
	I0831 22:30:36.021389   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0831 22:30:36.666541   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957517-m03 minikube.k8s.io/updated_at=2024_08_31T22_30_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=ha-957517 minikube.k8s.io/primary=false
	I0831 22:30:36.782200   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957517-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0831 22:30:36.894655   32390 start.go:319] duration metric: took 23.398737337s to joinCluster
	I0831 22:30:36.894733   32390 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:30:36.895064   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:30:36.896743   32390 out.go:177] * Verifying Kubernetes components...
	I0831 22:30:36.898389   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:30:37.151123   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:30:37.181266   32390 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:30:37.181679   32390 kapi.go:59] client config for ha-957517: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 22:30:37.181764   32390 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.137:8443
	I0831 22:30:37.182062   32390 node_ready.go:35] waiting up to 6m0s for node "ha-957517-m03" to be "Ready" ...
	I0831 22:30:37.182151   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:37.182162   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:37.182176   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:37.182185   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:37.185908   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:37.683239   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:37.683262   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:37.683273   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:37.683277   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:37.687843   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:38.183119   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:38.183141   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:38.183148   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:38.183153   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:38.187159   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:38.682343   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:38.682373   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:38.682385   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:38.682391   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:38.686020   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:39.182624   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:39.182649   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:39.182660   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:39.182666   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:39.185813   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:39.186458   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:39.683261   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:39.683286   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:39.683294   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:39.683300   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:39.686703   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:40.182678   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:40.182705   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:40.182715   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:40.182720   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:40.186456   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:40.682552   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:40.682572   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:40.682580   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:40.682583   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:40.687031   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:41.182626   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:41.182647   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:41.182653   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:41.182656   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:41.186239   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:41.186889   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:41.683085   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:41.683111   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:41.683123   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:41.683127   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:41.687320   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:42.182442   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:42.182467   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:42.182479   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:42.182485   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:42.185704   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:42.683173   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:42.683196   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:42.683206   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:42.683211   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:42.686679   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:43.182706   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:43.182728   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:43.182739   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:43.182743   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:43.186197   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:43.682319   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:43.682339   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:43.682348   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:43.682354   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:43.685892   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:43.686914   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:44.182675   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:44.182698   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:44.182708   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:44.182712   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:44.186543   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:44.683099   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:44.683119   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:44.683127   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:44.683132   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:44.686468   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:45.182558   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:45.182581   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:45.182592   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:45.182598   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:45.186214   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:45.682223   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:45.682242   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:45.682251   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:45.682255   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:45.686437   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:45.687048   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:46.182832   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:46.182857   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:46.182866   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:46.182872   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:46.186283   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:46.683105   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:46.683130   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:46.683138   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:46.683143   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:46.686663   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:47.182596   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:47.182617   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:47.182624   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:47.182628   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:47.186056   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:47.682514   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:47.682541   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:47.682552   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:47.682560   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:47.686089   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:48.182262   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:48.182282   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:48.182296   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:48.182300   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:48.185340   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:48.185861   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:48.683345   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:48.683369   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:48.683381   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:48.683387   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:48.686730   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:49.182208   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:49.182227   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:49.182236   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:49.182240   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:49.184998   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:49.682281   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:49.682304   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:49.682311   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:49.682316   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:49.685738   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:50.182436   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:50.182459   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:50.182466   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:50.182470   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:50.185718   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:50.186153   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:50.682526   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:50.682547   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:50.682555   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:50.682558   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:50.685921   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:51.182587   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:51.182610   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:51.182619   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:51.182626   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:51.186039   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:51.682731   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:51.682753   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:51.682761   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:51.682764   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:51.686183   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:52.183178   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:52.183205   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:52.183216   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:52.183222   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:52.186501   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:52.187171   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:52.682975   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:52.683002   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:52.683014   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:52.683020   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:52.686693   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:53.182902   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:53.182922   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:53.182930   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:53.182933   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:53.186625   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:53.682801   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:53.682823   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:53.682831   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:53.682835   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:53.686883   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:54.182740   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:54.182765   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:54.182773   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:54.182777   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:54.186180   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:54.682742   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:54.682765   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:54.682773   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:54.682777   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:54.686309   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:54.687039   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:55.182332   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:55.182361   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:55.182369   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:55.182375   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:55.185743   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:55.682927   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:55.682952   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:55.682960   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:55.682964   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:55.686522   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.182258   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:56.182280   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.182288   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.182291   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.186282   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.683068   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:56.683089   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.683112   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.683116   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.686477   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.687105   32390 node_ready.go:49] node "ha-957517-m03" has status "Ready":"True"
	I0831 22:30:56.687130   32390 node_ready.go:38] duration metric: took 19.505042541s for node "ha-957517-m03" to be "Ready" ...
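	The block of identical round_trippers GETs above is the node_ready wait loop: minikube re-fetches /api/v1/nodes/ha-957517-m03 roughly every 500ms until the Node object carries a Ready=True condition, which here took about 19.5s after the kubeadm join. A minimal client-go sketch of that pattern (not minikube's actual node_ready.go; the function name waitNodeReady and the use of wait.PollUntilContextTimeout are illustrative assumptions, while the kubeconfig path and node name are taken from the log above):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True,
	// mirroring the repeated GET /api/v1/nodes/<name> requests in the log.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// kubeconfig path as reported by loader.go:395 above (hypothetical reuse here).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-13149/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-957517-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println(`node "ha-957517-m03" is Ready`)
	}

	The pod_ready loop that starts below repeats the same idea per system-critical pod, checking each Pod's Ready condition instead of the Node's.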
	I0831 22:30:56.687150   32390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:30:56.687265   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:30:56.687280   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.687288   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.687291   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.692867   32390 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 22:30:56.699462   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.699536   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-k7rsc
	I0831 22:30:56.699547   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.699559   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.699571   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.702651   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.703215   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:56.703228   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.703236   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.703239   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.705694   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.706135   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.706150   32390 pod_ready.go:82] duration metric: took 6.667795ms for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.706158   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.706202   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-pc7gn
	I0831 22:30:56.706209   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.706216   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.706222   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.708870   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.709768   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:56.709781   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.709790   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.709794   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.712066   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.712571   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.712584   32390 pod_ready.go:82] duration metric: took 6.4208ms for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.712592   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.712633   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517
	I0831 22:30:56.712640   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.712646   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.712653   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.714854   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.715364   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:56.715378   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.715385   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.715390   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.717242   32390 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0831 22:30:56.717667   32390 pod_ready.go:93] pod "etcd-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.717681   32390 pod_ready.go:82] duration metric: took 5.081377ms for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.717692   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.717783   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m02
	I0831 22:30:56.717794   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.717804   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.717812   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.720147   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.720868   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:56.720887   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.720898   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.720903   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.723247   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.723851   32390 pod_ready.go:93] pod "etcd-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.723868   32390 pod_ready.go:82] duration metric: took 6.166126ms for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.723879   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.883231   32390 request.go:632] Waited for 159.272181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m03
	I0831 22:30:56.883301   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m03
	I0831 22:30:56.883309   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.883319   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.883344   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.887103   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.083282   32390 request.go:632] Waited for 195.276518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:57.083372   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:57.083380   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.083397   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.083403   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.086479   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.087146   32390 pod_ready.go:93] pod "etcd-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:57.087164   32390 pod_ready.go:82] duration metric: took 363.277554ms for pod "etcd-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.087186   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.283721   32390 request.go:632] Waited for 196.468387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:30:57.283784   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:30:57.283790   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.283800   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.283806   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.287750   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.484111   32390 request.go:632] Waited for 195.347511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:57.484178   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:57.484185   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.484195   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.484205   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.487283   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.488101   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:57.488120   32390 pod_ready.go:82] duration metric: took 400.923504ms for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.488130   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.683294   32390 request.go:632] Waited for 195.094427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:30:57.683392   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:30:57.683402   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.683414   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.683422   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.687181   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.883511   32390 request.go:632] Waited for 195.381148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:57.883565   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:57.883570   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.883577   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.883580   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.886823   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.887372   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:57.887393   32390 pod_ready.go:82] duration metric: took 399.255799ms for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.887402   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.083472   32390 request.go:632] Waited for 195.991565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m03
	I0831 22:30:58.083530   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m03
	I0831 22:30:58.083536   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.083543   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.083549   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.087070   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.283176   32390 request.go:632] Waited for 195.281909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:58.283262   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:58.283274   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.283284   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.283291   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.286495   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.287188   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:58.287209   32390 pod_ready.go:82] duration metric: took 399.798926ms for pod "kube-apiserver-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.287221   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.483167   32390 request.go:632] Waited for 195.876889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:30:58.483242   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:30:58.483253   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.483266   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.483274   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.486774   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.684037   32390 request.go:632] Waited for 196.343131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:58.684102   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:58.684109   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.684117   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.684123   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.688025   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.688814   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:58.688837   32390 pod_ready.go:82] duration metric: took 401.604106ms for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.688853   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.883936   32390 request.go:632] Waited for 194.998979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:30:58.883998   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:30:58.884003   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.884010   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.884015   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.887937   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.084000   32390 request.go:632] Waited for 195.107632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:59.084053   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:59.084058   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.084065   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.084069   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.087199   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.087753   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:59.087770   32390 pod_ready.go:82] duration metric: took 398.906989ms for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.087780   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.283993   32390 request.go:632] Waited for 196.135453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m03
	I0831 22:30:59.284049   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m03
	I0831 22:30:59.284057   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.284066   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.284075   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.287461   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.483706   32390 request.go:632] Waited for 195.38146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.483782   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.483790   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.483801   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.483812   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.487107   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.487734   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:59.487753   32390 pod_ready.go:82] duration metric: took 399.967358ms for pod "kube-controller-manager-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.487763   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5c5hn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.683854   32390 request.go:632] Waited for 196.033052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5c5hn
	I0831 22:30:59.683954   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5c5hn
	I0831 22:30:59.683966   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.683976   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.683984   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.687475   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.883786   32390 request.go:632] Waited for 195.364934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.883843   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.883850   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.883861   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.883868   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.887645   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.888410   32390 pod_ready.go:93] pod "kube-proxy-5c5hn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:59.888433   32390 pod_ready.go:82] duration metric: took 400.662277ms for pod "kube-proxy-5c5hn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.888447   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.083487   32390 request.go:632] Waited for 194.947499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:31:00.083552   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:31:00.083559   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.083570   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.083581   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.087488   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:00.283769   32390 request.go:632] Waited for 195.336987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:00.283856   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:00.283864   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.283875   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.283884   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.293253   32390 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 22:31:00.293916   32390 pod_ready.go:93] pod "kube-proxy-dvpbk" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:00.293939   32390 pod_ready.go:82] duration metric: took 405.482498ms for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.293952   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.484062   32390 request.go:632] Waited for 190.030367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:31:00.484130   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:31:00.484140   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.484150   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.484158   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.487988   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:00.683177   32390 request.go:632] Waited for 194.320148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:00.683233   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:00.683239   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.683246   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.683250   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.687212   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:00.688205   32390 pod_ready.go:93] pod "kube-proxy-xrp64" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:00.688226   32390 pod_ready.go:82] duration metric: took 394.267834ms for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.688238   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.883223   32390 request.go:632] Waited for 194.896382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:31:00.883295   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:31:00.883302   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.883312   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.883321   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.886609   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.083395   32390 request.go:632] Waited for 195.863734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:01.083445   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:01.083451   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.083458   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.083462   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.087010   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.087580   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:01.087606   32390 pod_ready.go:82] duration metric: took 399.360395ms for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.087620   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.283642   32390 request.go:632] Waited for 195.940969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:31:01.283718   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:31:01.283727   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.283738   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.283747   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.287223   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.483305   32390 request.go:632] Waited for 195.28996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:01.483408   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:01.483417   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.483428   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.483436   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.487095   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.487609   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:01.487630   32390 pod_ready.go:82] duration metric: took 400.001504ms for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.487645   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.683649   32390 request.go:632] Waited for 195.915486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m03
	I0831 22:31:01.683706   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m03
	I0831 22:31:01.683712   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.683719   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.683724   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.687858   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:31:01.883107   32390 request.go:632] Waited for 194.303617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:31:01.883178   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:31:01.883184   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.883190   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.883195   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.887179   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.887839   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:01.887860   32390 pod_ready.go:82] duration metric: took 400.201925ms for pod "kube-scheduler-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.887874   32390 pod_ready.go:39] duration metric: took 5.200711661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:31:01.887888   32390 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:31:01.887944   32390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:31:01.904041   32390 api_server.go:72] duration metric: took 25.00927153s to wait for apiserver process to appear ...
	I0831 22:31:01.904069   32390 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:31:01.904091   32390 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0831 22:31:01.908570   32390 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0831 22:31:01.908655   32390 round_trippers.go:463] GET https://192.168.39.137:8443/version
	I0831 22:31:01.908666   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.908678   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.908682   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.909745   32390 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0831 22:31:01.909900   32390 api_server.go:141] control plane version: v1.31.0
	I0831 22:31:01.909922   32390 api_server.go:131] duration metric: took 5.846706ms to wait for apiserver health ...
	I0831 22:31:01.909932   32390 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:31:02.083280   32390 request.go:632] Waited for 173.27165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.083431   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.083443   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.083451   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.083456   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.090427   32390 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 22:31:02.097912   32390 system_pods.go:59] 24 kube-system pods found
	I0831 22:31:02.097946   32390 system_pods.go:61] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:31:02.097952   32390 system_pods.go:61] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:31:02.097956   32390 system_pods.go:61] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:31:02.097960   32390 system_pods.go:61] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:31:02.097963   32390 system_pods.go:61] "etcd-ha-957517-m03" [2633fae5-5ee4-4509-9465-f2b720100d7c] Running
	I0831 22:31:02.097966   32390 system_pods.go:61] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:31:02.097969   32390 system_pods.go:61] "kindnet-jqhdm" [44214ffc-79cc-4762-808b-74c5c5b4c923] Running
	I0831 22:31:02.097972   32390 system_pods.go:61] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:31:02.097976   32390 system_pods.go:61] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:31:02.097979   32390 system_pods.go:61] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:31:02.097982   32390 system_pods.go:61] "kube-apiserver-ha-957517-m03" [43f18bca-f02c-4ca0-8b75-97537a3bc8d0] Running
	I0831 22:31:02.097985   32390 system_pods.go:61] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:31:02.097990   32390 system_pods.go:61] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:31:02.097993   32390 system_pods.go:61] "kube-controller-manager-ha-957517-m03" [534c9743-745b-4a51-b5a9-0bf6b555e504] Running
	I0831 22:31:02.097996   32390 system_pods.go:61] "kube-proxy-5c5hn" [7c2a5860-28aa-4dc3-977f-17291f3e15fa] Running
	I0831 22:31:02.098001   32390 system_pods.go:61] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:31:02.098007   32390 system_pods.go:61] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:31:02.098010   32390 system_pods.go:61] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:31:02.098014   32390 system_pods.go:61] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:31:02.098019   32390 system_pods.go:61] "kube-scheduler-ha-957517-m03" [d2e0a9a9-5dbd-4e8c-9282-2c87d1821a86] Running
	I0831 22:31:02.098022   32390 system_pods.go:61] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:31:02.098028   32390 system_pods.go:61] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:31:02.098031   32390 system_pods.go:61] "kube-vip-ha-957517-m03" [42993b2f-bc3b-436c-9c0f-ba89cce80e72] Running
	I0831 22:31:02.098036   32390 system_pods.go:61] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:31:02.098042   32390 system_pods.go:74] duration metric: took 188.104776ms to wait for pod list to return data ...
	I0831 22:31:02.098053   32390 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:31:02.283477   32390 request.go:632] Waited for 185.355709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:31:02.283532   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:31:02.283537   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.283546   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.283552   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.287643   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:31:02.287766   32390 default_sa.go:45] found service account: "default"
	I0831 22:31:02.287780   32390 default_sa.go:55] duration metric: took 189.721492ms for default service account to be created ...
	I0831 22:31:02.287788   32390 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:31:02.484140   32390 request.go:632] Waited for 196.257862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.484205   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.484213   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.484224   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.484232   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.490496   32390 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 22:31:02.497868   32390 system_pods.go:86] 24 kube-system pods found
	I0831 22:31:02.497898   32390 system_pods.go:89] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:31:02.497904   32390 system_pods.go:89] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:31:02.497908   32390 system_pods.go:89] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:31:02.497912   32390 system_pods.go:89] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:31:02.497916   32390 system_pods.go:89] "etcd-ha-957517-m03" [2633fae5-5ee4-4509-9465-f2b720100d7c] Running
	I0831 22:31:02.497919   32390 system_pods.go:89] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:31:02.497922   32390 system_pods.go:89] "kindnet-jqhdm" [44214ffc-79cc-4762-808b-74c5c5b4c923] Running
	I0831 22:31:02.497926   32390 system_pods.go:89] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:31:02.497930   32390 system_pods.go:89] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:31:02.497934   32390 system_pods.go:89] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:31:02.497937   32390 system_pods.go:89] "kube-apiserver-ha-957517-m03" [43f18bca-f02c-4ca0-8b75-97537a3bc8d0] Running
	I0831 22:31:02.497941   32390 system_pods.go:89] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:31:02.497964   32390 system_pods.go:89] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:31:02.497971   32390 system_pods.go:89] "kube-controller-manager-ha-957517-m03" [534c9743-745b-4a51-b5a9-0bf6b555e504] Running
	I0831 22:31:02.497975   32390 system_pods.go:89] "kube-proxy-5c5hn" [7c2a5860-28aa-4dc3-977f-17291f3e15fa] Running
	I0831 22:31:02.497979   32390 system_pods.go:89] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:31:02.497983   32390 system_pods.go:89] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:31:02.497986   32390 system_pods.go:89] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:31:02.497991   32390 system_pods.go:89] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:31:02.497994   32390 system_pods.go:89] "kube-scheduler-ha-957517-m03" [d2e0a9a9-5dbd-4e8c-9282-2c87d1821a86] Running
	I0831 22:31:02.497997   32390 system_pods.go:89] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:31:02.498001   32390 system_pods.go:89] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:31:02.498005   32390 system_pods.go:89] "kube-vip-ha-957517-m03" [42993b2f-bc3b-436c-9c0f-ba89cce80e72] Running
	I0831 22:31:02.498008   32390 system_pods.go:89] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:31:02.498023   32390 system_pods.go:126] duration metric: took 210.22695ms to wait for k8s-apps to be running ...
	I0831 22:31:02.498029   32390 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:31:02.498072   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:31:02.518083   32390 system_svc.go:56] duration metric: took 20.043969ms WaitForService to wait for kubelet
	I0831 22:31:02.518117   32390 kubeadm.go:582] duration metric: took 25.623350196s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:31:02.518159   32390 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:31:02.683560   32390 request.go:632] Waited for 165.316062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes
	I0831 22:31:02.683638   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes
	I0831 22:31:02.683646   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.683657   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.683670   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.687591   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:02.688836   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:31:02.688858   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:31:02.688870   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:31:02.688874   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:31:02.688878   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:31:02.688881   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:31:02.688885   32390 node_conditions.go:105] duration metric: took 170.720648ms to run NodePressure ...
	I0831 22:31:02.688895   32390 start.go:241] waiting for startup goroutines ...
	I0831 22:31:02.688913   32390 start.go:255] writing updated cluster config ...
	I0831 22:31:02.689194   32390 ssh_runner.go:195] Run: rm -f paused
	I0831 22:31:02.739626   32390 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:31:02.741538   32390 out.go:177] * Done! kubectl is now configured to use "ha-957517" cluster and "default" namespace by default
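	For reference, the readiness loop traced above (pod_ready.go repeatedly issuing a GET for each system pod, then its node, until the pod reports Ready) can be approximated with a short client-go program. This is an illustrative sketch only, not minikube's code: the kubeconfig path, the 2s/6m polling parameters, and the use of the etcd-ha-957517 pod name from the log are assumptions.

	// sketch: poll a kube-system pod until its Ready condition is True,
	// mirroring the pod_ready.go wait pattern shown in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfig location is an assumption (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s, up to 6m (the same overall timeout the log uses).
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-957517", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("pod Ready:", err == nil)
	}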
	
	
	==> CRI-O <==
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.050146155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681050122370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44b959ba-de8e-4fec-bfba-0a69768cb872 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.050652977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44be7c5b-aaad-4d31-9c7a-80158b7ec2c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.050705974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44be7c5b-aaad-4d31-9c7a-80158b7ec2c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.050931715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44be7c5b-aaad-4d31-9c7a-80158b7ec2c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.095503272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=741bc674-00db-410b-8ec8-6f7b4056a8c7 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.095598261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=741bc674-00db-410b-8ec8-6f7b4056a8c7 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.098061502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=394a6040-5dbb-4035-86d3-0551b38a6d93 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.098885579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681098853564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=394a6040-5dbb-4035-86d3-0551b38a6d93 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.099781946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbe8e5d0-a858-401c-a56d-3d412f081eeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.099852851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbe8e5d0-a858-401c-a56d-3d412f081eeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.100146292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbe8e5d0-a858-401c-a56d-3d412f081eeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.145506545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41e07339-f2a6-41a0-a30f-838c95b8b77b name=/runtime.v1.RuntimeService/Version
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.145601893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41e07339-f2a6-41a0-a30f-838c95b8b77b name=/runtime.v1.RuntimeService/Version
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.147184104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b77f4f8-1ab7-4bf8-9be9-9889fab5e38a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.147961626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681147931778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b77f4f8-1ab7-4bf8-9be9-9889fab5e38a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.149151526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b1b3e30-1e44-44d6-8e1c-5cc28f81abc8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.149225043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b1b3e30-1e44-44d6-8e1c-5cc28f81abc8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.149816311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b1b3e30-1e44-44d6-8e1c-5cc28f81abc8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.195758522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ca3d31f-4e70-4c09-859e-bc41c9c2aed7 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.195856382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ca3d31f-4e70-4c09-859e-bc41c9c2aed7 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.197097882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5b6d86b-5a3c-4e5e-9b3f-57f9912b6b7b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.198082723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681198007905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5b6d86b-5a3c-4e5e-9b3f-57f9912b6b7b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.199030929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=281f8ee4-ac2c-4c13-a1f6-696e1b72e27a name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.199099858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=281f8ee4-ac2c-4c13-a1f6-696e1b72e27a name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:34:41 ha-957517 crio[660]: time="2024-08-31 22:34:41.199535099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=281f8ee4-ac2c-4c13-a1f6-696e1b72e27a name=/runtime.v1.RuntimeService/ListContainers
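
Note: the Version, ImageFsInfo and ListContainers request/response pairs in the journal above are the standard CRI gRPC calls that the kubelet (and tools such as crictl) issue when polling the container runtime. A rough way to replay the Version call by hand (a sketch, assuming shell access to the node and the CRI-O socket path shown in the node annotations further below):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version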
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc9ea3c2c4cc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   9f283cd54a11f       busybox-7dff88458-zdnwd
	4a85b32a796fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   6e863e5cd9b9c       coredns-6f6b679f8f-k7rsc
	0cfba67fe9abb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   298283fc5c9c2       coredns-6f6b679f8f-pc7gn
	c7f58140d0328       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   f447d0de4324d       storage-provisioner
	35cc0bc2b6243       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   37828bdcd38b5       kindnet-tkvsc
	b1a123f41fac1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   99877abcdf5a7       kube-proxy-xrp64
	883967c8cb807       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   4b473227ca455       kube-vip-ha-957517
	e1c6a4e36ddb2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   144e67a21ecaa       kube-scheduler-ha-957517
	f3ae732e5626c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   960ae9b08a3ee       etcd-ha-957517
	179da26791305       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   53f202af525dd       kube-apiserver-ha-957517
	f4284e308e02a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   54c5069584051       kube-controller-manager-ha-957517
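
A listing like the table above can usually be reproduced on the node itself (a sketch, assuming the ha-957517 profile is still running and that crictl on the node is configured for CRI-O):

	out/minikube-linux-amd64 -p ha-957517 ssh "sudo crictl ps -a"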
	
	
	==> coredns [0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49314 - 30950 "HINFO IN 2244475907911654407.2267664286832635684. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013631152s
	[INFO] 10.244.2.2:59391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000367295s
	[INFO] 10.244.2.2:45655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001322338s
	[INFO] 10.244.2.2:45804 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001542866s
	[INFO] 10.244.0.4:36544 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002043312s
	[INFO] 10.244.1.2:34999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003609s
	[INFO] 10.244.1.2:45741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017294944s
	[INFO] 10.244.1.2:57093 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224681s
	[INFO] 10.244.2.2:49538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000358252s
	[INFO] 10.244.2.2:53732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00185161s
	[INFO] 10.244.2.2:41165 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231402s
	[INFO] 10.244.2.2:60230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118116s
	[INFO] 10.244.2.2:42062 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271609s
	[INFO] 10.244.0.4:49034 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067938s
	[INFO] 10.244.0.4:36002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196492s
	[INFO] 10.244.1.2:54186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124969s
	[INFO] 10.244.1.2:47709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000506218s
	[INFO] 10.244.0.4:54205 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087475s
	[INFO] 10.244.0.4:48802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055159s
	[INFO] 10.244.1.2:46825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148852s
	[INFO] 10.244.2.2:60523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183145s
	[INFO] 10.244.0.4:53842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116944s
	[INFO] 10.244.0.4:56291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000217808s
	[INFO] 10.244.0.4:53612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00028657s
	
	
	==> coredns [4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e] <==
	[INFO] 10.244.1.2:36845 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203981s
	[INFO] 10.244.2.2:34667 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133457s
	[INFO] 10.244.2.2:42430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001331909s
	[INFO] 10.244.2.2:33158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151531s
	[INFO] 10.244.0.4:34378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135148s
	[INFO] 10.244.0.4:43334 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001723638s
	[INFO] 10.244.0.4:54010 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080627s
	[INFO] 10.244.0.4:47700 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424459s
	[INFO] 10.244.0.4:50346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070487s
	[INFO] 10.244.0.4:43522 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051146s
	[INFO] 10.244.1.2:60157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099584s
	[INFO] 10.244.1.2:48809 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104515s
	[INFO] 10.244.2.2:37042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132626s
	[INFO] 10.244.2.2:38343 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117546s
	[INFO] 10.244.2.2:53716 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092804s
	[INFO] 10.244.2.2:59881 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068808s
	[INFO] 10.244.0.4:40431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093051s
	[INFO] 10.244.0.4:39552 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087951s
	[INFO] 10.244.1.2:59301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113713s
	[INFO] 10.244.1.2:40299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210744s
	[INFO] 10.244.1.2:54276 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210063s
	[INFO] 10.244.2.2:34222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307653s
	[INFO] 10.244.2.2:42028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089936s
	[INFO] 10.244.2.2:47927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066426s
	[INFO] 10.244.0.4:39601 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085891s
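
The NXDOMAIN answers above for names such as kubernetes.default and kubernetes.default.default.svc.cluster.local are expected by-products of the resolver's search-path handling in the pod's resolv.conf; only the fully qualified kubernetes.default.svc.cluster.local returns NOERROR. A quick in-cluster check (a sketch, assuming the kubectl context is named after the ha-957517 profile and the busybox image used elsewhere in this report is pullable):

	kubectl --context ha-957517 run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local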
	
	
	==> describe nodes <==
	Name:               ha-957517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_28_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:34:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-957517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 438078db78ee43a0bfe8057c915827a8
	  System UUID:                438078db-78ee-43a0-bfe8-057c915827a8
	  Boot ID:                    e88a2dfb-1351-416c-9b78-5a255e623f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zdnwd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-6f6b679f8f-k7rsc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-6f6b679f8f-pc7gn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-957517                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-tkvsc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-957517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-957517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-xrp64                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-957517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-957517                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  Starting                 6m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m20s (x2 over 6m20s)  kubelet          Node ha-957517 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x2 over 6m20s)  kubelet          Node ha-957517 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x2 over 6m20s)  kubelet          Node ha-957517 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal  NodeReady                5m59s (x2 over 5m59s)  kubelet          Node ha-957517 status is now: NodeReady
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	
	
	Name:               ha-957517-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_29_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:29:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:32:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-957517-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a152f180715c42228f54c353a9e8c1bb
	  System UUID:                a152f180-715c-4222-8f54-c353a9e8c1bb
	  Boot ID:                    475f4e70-e580-4071-92be-a87256c6caa3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cwtrb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-957517-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-bmxh2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-957517-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-957517-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-dvpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-957517-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-957517-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-957517-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-957517-m02 status is now: NodeNotReady
	
	
	Name:               ha-957517-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_30_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:30:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:34:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-957517-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 886d8b963cd94078ae7cf268a2d07053
	  System UUID:                886d8b96-3cd9-4078-ae7c-f268a2d07053
	  Boot ID:                    cf8e9f17-005d-4cb8-af63-0ff51a14233f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkvvp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-957517-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-jqhdm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-957517-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-957517-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-5c5hn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-957517-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-957517-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-957517-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-957517-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-957517-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	
	
	Name:               ha-957517-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_31_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:31:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:31:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:31:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:31:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:32:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-957517-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08b180ad339e4d19acb3ea0e7328dc00
	  System UUID:                08b180ad-339e-4d19-acb3-ea0e7328dc00
	  Boot ID:                    eb027e2a-5c22-4721-9b4b-8b9696ccec09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2t9r8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-6f6xd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-957517-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-957517-m04 status is now: NodeReady
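
The node descriptions above show ha-957517-m02 flagged NodeNotReady by the node controller while ha-957517-m03 and ha-957517-m04 still report Ready, which is what you would expect with m02 stopped. As a hedged sketch (assuming the same ha-957517 kubeconfig context the test itself uses), the Ready condition for every node can be pulled with one jsonpath query:

  kubectl --context ha-957517 get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'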
	
	
	==> dmesg <==
	[Aug31 22:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050272] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040028] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780554] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.478094] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.617326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug31 22:28] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.064763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057170] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193531] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.118523] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.278233] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.003192] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.620544] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058441] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.958169] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083987] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.815006] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.616164] kauditd_printk_skb: 38 callbacks suppressed
	[Aug31 22:29] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18] <==
	{"level":"warn","ts":"2024-08-31T22:34:41.235078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.334955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.435125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.484697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.535514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.542674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.553114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.562701Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.576763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.578464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.587581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.598630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.604827Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.610820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.614056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.616990Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.625503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.632759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.634853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.639120Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.643046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.646743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.651305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.663085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:34:41.669515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:34:41 up 6 min,  0 users,  load average: 0.23, 0.33, 0.18
	Linux ha-957517 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23] <==
	I0831 22:34:01.965274       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:34:11.963880       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:34:11.964089       1 main.go:299] handling current node
	I0831 22:34:11.964145       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:34:11.964164       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:34:11.964357       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:34:11.964461       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:34:11.964544       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:34:11.964565       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:34:21.972761       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:34:21.972826       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:34:21.973068       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:34:21.973094       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:34:21.973171       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:34:21.973196       1 main.go:299] handling current node
	I0831 22:34:21.973225       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:34:21.973235       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:34:31.964236       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:34:31.964263       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:34:31.964445       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:34:31.964471       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:34:31.964531       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:34:31.964553       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:34:31.964601       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:34:31.964622       1 main.go:299] handling current node
	
	
	==> kube-apiserver [179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d] <==
	I0831 22:28:20.298828       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0831 22:28:20.306625       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137]
	I0831 22:28:20.308017       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 22:28:20.312249       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0831 22:28:20.586447       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0831 22:28:21.438940       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0831 22:28:21.452305       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0831 22:28:21.464936       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0831 22:28:25.687561       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0831 22:28:25.936439       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0831 22:31:09.632303       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38730: use of closed network connection
	E0831 22:31:09.821225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38754: use of closed network connection
	E0831 22:31:10.009673       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38772: use of closed network connection
	E0831 22:31:10.224354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38790: use of closed network connection
	E0831 22:31:10.414899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38816: use of closed network connection
	E0831 22:31:10.594803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38846: use of closed network connection
	E0831 22:31:10.784357       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38854: use of closed network connection
	E0831 22:31:10.966814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38880: use of closed network connection
	E0831 22:31:11.165809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38912: use of closed network connection
	E0831 22:31:11.454018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38918: use of closed network connection
	E0831 22:31:11.622063       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38940: use of closed network connection
	E0831 22:31:11.799191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38972: use of closed network connection
	E0831 22:31:11.972716       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38988: use of closed network connection
	E0831 22:31:12.149268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39010: use of closed network connection
	E0831 22:31:12.339572       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39026: use of closed network connection
	
	
	==> kube-controller-manager [f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899] <==
	I0831 22:31:40.644727       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-957517-m04" podCIDRs=["10.244.3.0/24"]
	I0831 22:31:40.644789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:40.644834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:40.656731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:40.914999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:41.092603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:41.465269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:42.269077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:42.347015       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:45.360870       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:45.361356       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-957517-m04"
	I0831 22:31:45.385156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:51.026559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:01.064913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:01.065464       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-957517-m04"
	I0831 22:32:01.088211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:02.290136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:11.274284       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:52.314276       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:32:52.314736       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-957517-m04"
	I0831 22:32:52.341104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:32:52.378963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.309902ms"
	I0831 22:32:52.379050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.666µs"
	I0831 22:32:55.459662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:32:57.549701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	
	
	==> kube-proxy [b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:28:27.350238       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:28:27.365865       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0831 22:28:27.366008       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:28:27.407549       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:28:27.407635       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:28:27.407682       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:28:27.410268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:28:27.410698       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:28:27.410744       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:28:27.411896       1 config.go:197] "Starting service config controller"
	I0831 22:28:27.412108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:28:27.412157       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:28:27.412174       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:28:27.412762       1 config.go:326] "Starting node config controller"
	I0831 22:28:27.415567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:28:27.512346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:28:27.512467       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:28:27.515752       1 shared_informer.go:320] Caches are synced for node config
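
The nftables messages at the top of this excerpt show kube-proxy's stale-rule cleanup failing with "Operation not supported" on this guest kernel, after which it proceeds with the iptables proxier. The one actionable hint is the startup warning that nodePortAddresses is unset, with `--nodeport-addresses primary` suggested so NodePort connections are not accepted on every local IP. As a hedged illustration only (not this cluster's actual configuration), the equivalent KubeProxyConfiguration fragment would be:

  # Illustrative fragment; values are assumptions, not captured from this run.
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  mode: "iptables"            # matches "Using iptables Proxier" above
  nodePortAddresses:
    - "primary"               # accept NodePort traffic only on the node's primary IPs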
	
	
	==> kube-scheduler [e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3] <==
	W0831 22:28:19.960560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:28:19.960646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:28:19.981183       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:28:19.981234       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0831 22:28:22.321943       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 22:30:33.523418       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jqhdm\": pod kindnet-jqhdm is already assigned to node \"ha-957517-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-jqhdm" node="ha-957517-m03"
	E0831 22:30:33.524317       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 44214ffc-79cc-4762-808b-74c5c5b4c923(kube-system/kindnet-jqhdm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jqhdm"
	E0831 22:30:33.527453       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jqhdm\": pod kindnet-jqhdm is already assigned to node \"ha-957517-m03\"" pod="kube-system/kindnet-jqhdm"
	I0831 22:30:33.527536       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jqhdm" node="ha-957517-m03"
	E0831 22:31:03.668045       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkvvp\": pod busybox-7dff88458-fkvvp is already assigned to node \"ha-957517-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fkvvp" node="ha-957517-m03"
	E0831 22:31:03.669556       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8887e4b3-2a39-4b37-a077-d7deaf9a2772(default/busybox-7dff88458-fkvvp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fkvvp"
	E0831 22:31:03.669647       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkvvp\": pod busybox-7dff88458-fkvvp is already assigned to node \"ha-957517-m03\"" pod="default/busybox-7dff88458-fkvvp"
	I0831 22:31:03.669693       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fkvvp" node="ha-957517-m03"
	E0831 22:31:40.718597       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xmftg\": pod kube-proxy-xmftg is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xmftg" node="ha-957517-m04"
	E0831 22:31:40.718699       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xmftg\": pod kube-proxy-xmftg is already assigned to node \"ha-957517-m04\"" pod="kube-system/kube-proxy-xmftg"
	E0831 22:31:40.725285       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-srxdg\": pod kube-proxy-srxdg is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-srxdg" node="ha-957517-m04"
	E0831 22:31:40.725498       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-srxdg\": pod kube-proxy-srxdg is already assigned to node \"ha-957517-m04\"" pod="kube-system/kube-proxy-srxdg"
	E0831 22:31:40.726133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.726210       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6de5171d-ad2f-4f18-9d99-a6fc3709304c(kube-system/kindnet-2t9r8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2t9r8"
	E0831 22:31:40.726228       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-2t9r8"
	I0831 22:31:40.726253       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.731781       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:31:40.731866       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3457f0a0-fd3b-4e40-819f-9d57c29036e6(kube-system/kindnet-mljxh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mljxh"
	E0831 22:31:40.731884       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-mljxh"
	I0831 22:31:40.731900       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	
	
	==> kubelet <==
	Aug 31 22:33:21 ha-957517 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:33:21 ha-957517 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:33:21 ha-957517 kubelet[1303]: E0831 22:33:21.502323    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143601502052749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:21 ha-957517 kubelet[1303]: E0831 22:33:21.502354    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143601502052749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:31 ha-957517 kubelet[1303]: E0831 22:33:31.504172    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143611503890943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:31 ha-957517 kubelet[1303]: E0831 22:33:31.504213    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143611503890943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:41 ha-957517 kubelet[1303]: E0831 22:33:41.506406    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143621506059787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:41 ha-957517 kubelet[1303]: E0831 22:33:41.506455    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143621506059787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:51 ha-957517 kubelet[1303]: E0831 22:33:51.513715    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143631508612982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:33:51 ha-957517 kubelet[1303]: E0831 22:33:51.513810    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143631508612982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:01 ha-957517 kubelet[1303]: E0831 22:34:01.517010    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143641516257600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:01 ha-957517 kubelet[1303]: E0831 22:34:01.517363    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143641516257600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:11 ha-957517 kubelet[1303]: E0831 22:34:11.520434    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143651519913314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:11 ha-957517 kubelet[1303]: E0831 22:34:11.520504    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143651519913314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:21 ha-957517 kubelet[1303]: E0831 22:34:21.417962    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 22:34:21 ha-957517 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 22:34:21 ha-957517 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 22:34:21 ha-957517 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:34:21 ha-957517 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:34:21 ha-957517 kubelet[1303]: E0831 22:34:21.523275    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143661522850588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:21 ha-957517 kubelet[1303]: E0831 22:34:21.523312    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143661522850588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:31 ha-957517 kubelet[1303]: E0831 22:34:31.526580    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143671525063817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:31 ha-957517 kubelet[1303]: E0831 22:34:31.527025    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143671525063817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:41 ha-957517 kubelet[1303]: E0831 22:34:41.529063    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681528655792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:41 ha-957517 kubelet[1303]: E0831 22:34:41.529093    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681528655792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
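
The kubelet excerpt repeats the same pair of eviction-manager errors every ten seconds: cri-o's ImageFsInfo response contains image-filesystem stats but an empty ContainerFilesystems list, so the eviction manager reports that it cannot determine whether a dedicated image filesystem exists. A hedged way to inspect the raw CRI response the kubelet is objecting to (assuming crictl is available on the node) is:

  sudo crictl imagefsinfo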
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-957517 -n ha-957517
helpers_test.go:262: (dbg) Run:  kubectl --context ha-957517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:286: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 3 (3.196730921s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:34:46.212838   37209 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:34:46.212942   37209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:46.212949   37209 out.go:358] Setting ErrFile to fd 2...
	I0831 22:34:46.212954   37209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:46.213211   37209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:34:46.213462   37209 out.go:352] Setting JSON to false
	I0831 22:34:46.213493   37209 mustload.go:65] Loading cluster: ha-957517
	I0831 22:34:46.213538   37209 notify.go:220] Checking for updates...
	I0831 22:34:46.214014   37209 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:34:46.214035   37209 status.go:255] checking status of ha-957517 ...
	I0831 22:34:46.214439   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:46.214476   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:46.235603   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0831 22:34:46.236087   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:46.236633   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:46.236656   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:46.237102   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:46.237335   37209 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:34:46.238972   37209 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:34:46.238990   37209 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:46.239418   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:46.239467   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:46.254771   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0831 22:34:46.255152   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:46.255600   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:46.255620   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:46.255933   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:46.256147   37209 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:34:46.258439   37209 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:46.258867   37209 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:46.258885   37209 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:46.259050   37209 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:46.259461   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:46.259501   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:46.273679   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0831 22:34:46.274014   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:46.274466   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:46.274487   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:46.274757   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:46.274936   37209 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:34:46.275121   37209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:46.275147   37209 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:34:46.277714   37209 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:46.278073   37209 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:46.278100   37209 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:46.278257   37209 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:34:46.278448   37209 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:34:46.278623   37209 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:34:46.278743   37209 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:34:46.354748   37209 ssh_runner.go:195] Run: systemctl --version
	I0831 22:34:46.360738   37209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:46.376871   37209 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:46.376903   37209 api_server.go:166] Checking apiserver status ...
	I0831 22:34:46.376943   37209 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:46.391061   37209 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:34:46.401265   37209 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:46.401325   37209 ssh_runner.go:195] Run: ls
	I0831 22:34:46.406156   37209 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:46.410312   37209 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:46.410334   37209 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:34:46.410343   37209 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:46.410359   37209 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:34:46.410657   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:46.410696   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:46.425716   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0831 22:34:46.426098   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:46.426514   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:46.426534   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:46.426844   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:46.427029   37209 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:34:46.428652   37209 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:34:46.428668   37209 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:46.428973   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:46.429012   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:46.443335   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0831 22:34:46.443785   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:46.444266   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:46.444283   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:46.444642   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:46.444830   37209 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:34:46.447731   37209 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:46.448146   37209 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:46.448179   37209 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:46.448288   37209 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:46.448640   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:46.448675   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:46.462924   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0831 22:34:46.463265   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:46.463707   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:46.463726   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:46.464012   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:46.464169   37209 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:34:46.464307   37209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:46.464327   37209 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:34:46.466839   37209 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:46.467278   37209 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:46.467298   37209 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:46.467531   37209 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:34:46.467685   37209 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:34:46.467824   37209 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:34:46.467946   37209 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:34:49.031634   37209 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:34:49.031721   37209 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:34:49.031737   37209 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:49.031751   37209 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:34:49.031769   37209 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:49.031781   37209 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:34:49.032101   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:49.032138   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:49.047048   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I0831 22:34:49.047474   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:49.047964   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:49.047991   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:49.048288   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:49.048479   37209 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:34:49.049834   37209 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:34:49.049852   37209 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:34:49.050146   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:49.050206   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:49.065025   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0831 22:34:49.065442   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:49.065919   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:49.065940   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:49.066190   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:49.066356   37209 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:34:49.069322   37209 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:49.069702   37209 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:34:49.069726   37209 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:49.069868   37209 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:34:49.070183   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:49.070232   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:49.084722   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0831 22:34:49.085115   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:49.085571   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:49.085592   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:49.085865   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:49.086033   37209 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:34:49.086217   37209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:49.086234   37209 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:34:49.088798   37209 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:49.089167   37209 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:34:49.089191   37209 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:49.089351   37209 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:34:49.089505   37209 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:34:49.089664   37209 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:34:49.089795   37209 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:34:49.167767   37209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:49.182330   37209 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:49.182353   37209 api_server.go:166] Checking apiserver status ...
	I0831 22:34:49.182378   37209 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:49.196238   37209 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:34:49.206320   37209 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:49.206361   37209 ssh_runner.go:195] Run: ls
	I0831 22:34:49.211016   37209 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:49.215124   37209 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:49.215147   37209 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:34:49.215159   37209 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:49.215177   37209 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:34:49.215535   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:49.215569   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:49.230193   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I0831 22:34:49.230587   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:49.231014   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:49.231033   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:49.231356   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:49.231546   37209 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:34:49.233000   37209 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:34:49.233015   37209 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:34:49.233288   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:49.233317   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:49.248324   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45087
	I0831 22:34:49.248726   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:49.249220   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:49.249238   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:49.249594   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:49.249784   37209 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:34:49.252755   37209 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:49.253155   37209 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:34:49.253186   37209 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:49.253306   37209 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:34:49.253627   37209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:49.253671   37209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:49.269455   37209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0831 22:34:49.269853   37209 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:49.270320   37209 main.go:141] libmachine: Using API Version  1
	I0831 22:34:49.270345   37209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:49.270660   37209 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:49.270862   37209 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:34:49.271047   37209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:49.271095   37209 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:34:49.273637   37209 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:49.274042   37209 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:34:49.274058   37209 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:49.274170   37209 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:34:49.274367   37209 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:34:49.274506   37209 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:34:49.274637   37209 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:34:49.351394   37209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:49.366172   37209 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
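The stderr above shows the status check failing only at the SSH dial step for ha-957517-m02 ("dial tcp 192.168.39.61:22: connect: no route to host"), which is why that node is reported as Host:Error / Kubelet:Nonexistent while the other nodes stay Running. As a minimal sketch (not part of the minikube test suite), the check that fails can be reproduced with a plain TCP dial against the node's SSH port; the IP 192.168.39.61 is taken from the DHCP lease in the log and should be substituted for whichever node is being debugged.

// reachability_check.go — illustrative sketch only; mirrors the dial that
// sshutil.go attempts before opening an SSH session to the node.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address of ha-957517-m02 as recorded in the log; an assumption for this sketch.
	addr := "192.168.39.61:22"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// "connect: no route to host" here means the VM or its network is gone,
		// so kubelet/apiserver state cannot even be queried over SSH.
		fmt.Printf("node unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable; the failure is further up the stack")
}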
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 3 (5.321306429s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:34:50.236310   37294 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:34:50.236540   37294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:50.236549   37294 out.go:358] Setting ErrFile to fd 2...
	I0831 22:34:50.236556   37294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:50.236753   37294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:34:50.236927   37294 out.go:352] Setting JSON to false
	I0831 22:34:50.236951   37294 mustload.go:65] Loading cluster: ha-957517
	I0831 22:34:50.237045   37294 notify.go:220] Checking for updates...
	I0831 22:34:50.237368   37294 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:34:50.237383   37294 status.go:255] checking status of ha-957517 ...
	I0831 22:34:50.237842   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:50.237904   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:50.255596   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0831 22:34:50.255964   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:50.256501   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:50.256520   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:50.256818   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:50.257005   37294 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:34:50.258559   37294 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:34:50.258572   37294 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:50.258826   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:50.258868   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:50.273179   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0831 22:34:50.273549   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:50.273932   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:50.273952   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:50.274240   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:50.274404   37294 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:34:50.276973   37294 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:50.277362   37294 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:50.277391   37294 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:50.277528   37294 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:50.277802   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:50.277833   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:50.292529   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0831 22:34:50.292908   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:50.293338   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:50.293359   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:50.293609   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:50.293789   37294 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:34:50.293955   37294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:50.293976   37294 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:34:50.296675   37294 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:50.297127   37294 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:50.297152   37294 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:50.297296   37294 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:34:50.297452   37294 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:34:50.297688   37294 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:34:50.297827   37294 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:34:50.383011   37294 ssh_runner.go:195] Run: systemctl --version
	I0831 22:34:50.389866   37294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:50.405246   37294 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:50.405272   37294 api_server.go:166] Checking apiserver status ...
	I0831 22:34:50.405307   37294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:50.422228   37294 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:34:50.436232   37294 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:50.436282   37294 ssh_runner.go:195] Run: ls
	I0831 22:34:50.440409   37294 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:50.446224   37294 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:50.446245   37294 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:34:50.446256   37294 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:50.446271   37294 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:34:50.446559   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:50.446591   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:50.461783   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0831 22:34:50.462161   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:50.462629   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:50.462650   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:50.462934   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:50.463114   37294 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:34:50.464582   37294 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:34:50.464599   37294 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:50.464885   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:50.464945   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:50.478874   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0831 22:34:50.479197   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:50.479611   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:50.479629   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:50.479907   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:50.480087   37294 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:34:50.482625   37294 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:50.483015   37294 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:50.483037   37294 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:50.483167   37294 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:50.483534   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:50.483567   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:50.499257   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0831 22:34:50.499734   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:50.500191   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:50.500210   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:50.500496   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:50.500625   37294 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:34:50.500766   37294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:50.500786   37294 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:34:50.503475   37294 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:50.503854   37294 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:50.503889   37294 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:50.504008   37294 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:34:50.504232   37294 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:34:50.504392   37294 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:34:50.504549   37294 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:34:52.099661   37294 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:52.099737   37294 retry.go:31] will retry after 176.311535ms: dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:34:55.171577   37294 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:34:55.171692   37294 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:34:55.171724   37294 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:55.171737   37294 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:34:55.171769   37294 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:55.171783   37294 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:34:55.172104   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:55.172147   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:55.186892   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36353
	I0831 22:34:55.187274   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:55.187736   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:55.187757   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:55.188106   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:55.188297   37294 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:34:55.189832   37294 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:34:55.189845   37294 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:34:55.190187   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:55.190224   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:55.205552   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0831 22:34:55.205900   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:55.206314   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:55.206334   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:55.206611   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:55.206804   37294 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:34:55.209659   37294 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:55.210081   37294 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:34:55.210109   37294 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:55.210235   37294 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:34:55.210632   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:55.210671   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:55.225512   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0831 22:34:55.225876   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:55.226332   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:55.226353   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:55.226668   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:55.226865   37294 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:34:55.227052   37294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:55.227069   37294 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:34:55.229835   37294 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:55.230250   37294 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:34:55.230279   37294 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:34:55.230381   37294 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:34:55.230547   37294 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:34:55.230698   37294 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:34:55.230810   37294 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:34:55.306940   37294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:55.322863   37294 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:55.322887   37294 api_server.go:166] Checking apiserver status ...
	I0831 22:34:55.322916   37294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:55.336917   37294 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:34:55.351668   37294 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:55.351725   37294 ssh_runner.go:195] Run: ls
	I0831 22:34:55.356490   37294 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:55.362726   37294 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:55.362756   37294 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:34:55.362769   37294 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:55.362785   37294 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:34:55.363100   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:55.363146   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:55.377977   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0831 22:34:55.378372   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:55.378786   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:55.378806   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:55.379072   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:55.379218   37294 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:34:55.380617   37294 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:34:55.380634   37294 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:34:55.381008   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:55.381048   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:55.396052   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0831 22:34:55.396490   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:55.396973   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:55.396995   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:55.397297   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:55.397484   37294 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:34:55.400286   37294 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:55.400728   37294 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:34:55.400753   37294 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:55.400875   37294 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:34:55.401189   37294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:55.401222   37294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:55.415836   37294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33909
	I0831 22:34:55.416254   37294 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:55.416697   37294 main.go:141] libmachine: Using API Version  1
	I0831 22:34:55.416720   37294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:55.417045   37294 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:55.417254   37294 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:34:55.417468   37294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:55.417494   37294 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:34:55.420063   37294 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:55.420479   37294 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:34:55.420498   37294 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:34:55.420659   37294 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:34:55.420803   37294 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:34:55.420930   37294 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:34:55.421056   37294 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:34:55.500211   37294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:55.514807   37294 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
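For the control-plane nodes that do respond, the log shows the status command probing the HA virtual IP ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ...") and treating a 200/"ok" response as APIServer:Running. The sketch below is an illustrative stand-in for that probe, not minikube's implementation: minikube authenticates with the cluster's client certificates, whereas this example skips TLS verification purely to stay self-contained.

// healthz_probe.go — hedged example of the healthz check seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only: do not verify the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// 192.168.39.254:8443 is the "ha-957517" server address found in the kubeconfig per the log.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Printf("healthz request failed: %v\n", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with body "ok", matching the log output.
	fmt.Printf("%s: %s\n", resp.Status, string(body))
}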
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
E0831 22:34:59.874637   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 3 (5.279525552s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:34:56.417218   37410 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:34:56.417480   37410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:56.417490   37410 out.go:358] Setting ErrFile to fd 2...
	I0831 22:34:56.417495   37410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:56.417662   37410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:34:56.417829   37410 out.go:352] Setting JSON to false
	I0831 22:34:56.417856   37410 mustload.go:65] Loading cluster: ha-957517
	I0831 22:34:56.417959   37410 notify.go:220] Checking for updates...
	I0831 22:34:56.418249   37410 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:34:56.418266   37410 status.go:255] checking status of ha-957517 ...
	I0831 22:34:56.418643   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:56.418714   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:56.437123   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0831 22:34:56.437471   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:56.437997   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:34:56.438021   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:56.438441   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:56.438640   37410 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:34:56.440039   37410 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:34:56.440053   37410 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:56.440337   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:56.440383   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:56.454659   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0831 22:34:56.454993   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:56.455384   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:34:56.455411   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:56.455688   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:56.455856   37410 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:34:56.458405   37410 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:56.458779   37410 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:56.458810   37410 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:56.458886   37410 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:34:56.459165   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:56.459197   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:56.473886   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0831 22:34:56.474255   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:56.474653   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:34:56.474671   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:56.474905   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:56.475039   37410 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:34:56.475188   37410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:56.475206   37410 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:34:56.477422   37410 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:56.477896   37410 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:34:56.477926   37410 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:34:56.478025   37410 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:34:56.478177   37410 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:34:56.478330   37410 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:34:56.478457   37410 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:34:56.555050   37410 ssh_runner.go:195] Run: systemctl --version
	I0831 22:34:56.561355   37410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:34:56.577462   37410 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:34:56.577488   37410 api_server.go:166] Checking apiserver status ...
	I0831 22:34:56.577517   37410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:34:56.590976   37410 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:34:56.601179   37410 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:34:56.601221   37410 ssh_runner.go:195] Run: ls
	I0831 22:34:56.605112   37410 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:34:56.610747   37410 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:34:56.610764   37410 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:34:56.610773   37410 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:34:56.610788   37410 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:34:56.611084   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:56.611123   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:56.625689   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0831 22:34:56.626062   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:56.626453   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:34:56.626473   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:56.626731   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:56.626917   37410 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:34:56.628458   37410 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:34:56.628475   37410 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:56.628732   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:56.628765   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:56.642645   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0831 22:34:56.643033   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:56.643531   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:34:56.643550   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:56.643823   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:56.643983   37410 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:34:56.646238   37410 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:56.646617   37410 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:56.646642   37410 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:56.646783   37410 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:34:56.647083   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:34:56.647114   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:34:56.661078   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0831 22:34:56.661511   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:34:56.661939   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:34:56.661954   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:34:56.662283   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:34:56.662469   37410 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:34:56.662641   37410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:34:56.662661   37410 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:34:56.664980   37410 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:56.665335   37410 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:34:56.665365   37410 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:34:56.665481   37410 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:34:56.665620   37410 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:34:56.665766   37410 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:34:56.665873   37410 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:34:58.243614   37410 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:34:58.243685   37410 retry.go:31] will retry after 270.80813ms: dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:35:01.315596   37410 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:35:01.315669   37410 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:35:01.315687   37410 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:01.315699   37410 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:35:01.315735   37410 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:01.315742   37410 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:35:01.316051   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:01.316107   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:01.331003   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0831 22:35:01.331437   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:01.331923   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:35:01.331948   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:01.332234   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:01.332445   37410 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:01.334181   37410 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:35:01.334198   37410 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:01.334506   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:01.334539   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:01.349807   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0831 22:35:01.350210   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:01.350683   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:35:01.350708   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:01.350985   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:01.351198   37410 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:35:01.354068   37410 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:01.354536   37410 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:01.354564   37410 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:01.354735   37410 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:01.355214   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:01.355273   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:01.370529   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I0831 22:35:01.370967   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:01.371368   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:35:01.371398   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:01.371699   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:01.371858   37410 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:01.372055   37410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:01.372075   37410 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:01.374986   37410 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:01.375422   37410 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:01.375454   37410 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:01.375586   37410 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:01.375749   37410 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:01.375934   37410 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:01.376107   37410 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:01.454994   37410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:01.469905   37410 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:01.469935   37410 api_server.go:166] Checking apiserver status ...
	I0831 22:35:01.469982   37410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:01.483626   37410 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:35:01.493165   37410 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:01.493217   37410 ssh_runner.go:195] Run: ls
	I0831 22:35:01.497992   37410 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:01.502531   37410 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:01.502559   37410 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:35:01.502571   37410 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:01.502587   37410 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:35:01.503019   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:01.503062   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:01.518348   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0831 22:35:01.518809   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:01.519287   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:35:01.519310   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:01.519630   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:01.519856   37410 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:01.521475   37410 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:35:01.521489   37410 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:01.521761   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:01.521795   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:01.536917   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0831 22:35:01.537433   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:01.537988   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:35:01.538013   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:01.538358   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:01.538567   37410 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:35:01.542039   37410 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:01.542600   37410 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:01.542629   37410 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:01.542869   37410 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:01.543280   37410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:01.543346   37410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:01.558791   37410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0831 22:35:01.559288   37410 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:01.559855   37410 main.go:141] libmachine: Using API Version  1
	I0831 22:35:01.559885   37410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:01.560224   37410 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:01.560417   37410 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:01.560674   37410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:01.560712   37410 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:01.563626   37410 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:01.564094   37410 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:01.564130   37410 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:01.564251   37410 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:01.564419   37410 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:01.564536   37410 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:01.564670   37410 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:01.642743   37410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:01.657349   37410 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
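The stderr log above shows why ha-957517-m02 is reported as Host:Error — every probe against it starts with an SSH session to 192.168.39.61:22, and each dial fails with "no route to host", so the storage, kubelet, and apiserver checks never run. Below is a minimal, hypothetical sketch (not part of the test suite) of how one might reproduce just that reachability check from the test host; the address is taken from the log above, and net.DialTimeout is a standard-library call, not the helper minikube itself uses.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.61:22 is the SSH endpoint the status probe could not reach
	// ("no route to host") in the log above.
	conn, err := net.DialTimeout("tcp", "192.168.39.61:22", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable")
}

If this sketch also reports "no route to host", the failure is at the network/VM level (the m02 guest is down or its lease changed) rather than in the status command itself.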
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 3 (3.991550425s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:04.005742   37511 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:04.005997   37511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:04.006007   37511 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:04.006011   37511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:04.006193   37511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:35:04.006347   37511 out.go:352] Setting JSON to false
	I0831 22:35:04.006372   37511 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:04.006503   37511 notify.go:220] Checking for updates...
	I0831 22:35:04.006863   37511 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:04.006882   37511 status.go:255] checking status of ha-957517 ...
	I0831 22:35:04.007393   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:04.007466   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:04.025575   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0831 22:35:04.026040   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:04.026686   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:04.026752   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:04.027152   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:04.027375   37511 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:35:04.029134   37511 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:35:04.029148   37511 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:04.029419   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:04.029447   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:04.044412   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0831 22:35:04.044841   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:04.045273   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:04.045294   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:04.045649   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:04.045837   37511 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:35:04.048642   37511 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:04.049080   37511 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:04.049116   37511 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:04.049277   37511 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:04.049617   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:04.049657   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:04.065479   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0831 22:35:04.065885   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:04.066342   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:04.066364   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:04.066666   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:04.066873   37511 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:35:04.067050   37511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:04.067083   37511 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:35:04.070060   37511 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:04.070585   37511 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:04.070602   37511 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:04.070735   37511 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:35:04.070887   37511 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:35:04.071082   37511 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:35:04.071250   37511 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:35:04.151665   37511 ssh_runner.go:195] Run: systemctl --version
	I0831 22:35:04.157820   37511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:04.173938   37511 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:04.173989   37511 api_server.go:166] Checking apiserver status ...
	I0831 22:35:04.174027   37511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:04.187967   37511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:35:04.197564   37511 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:04.197619   37511 ssh_runner.go:195] Run: ls
	I0831 22:35:04.201930   37511 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:04.206339   37511 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:04.206365   37511 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:35:04.206380   37511 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:04.206403   37511 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:35:04.206808   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:04.206859   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:04.222357   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I0831 22:35:04.222815   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:04.223306   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:04.223340   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:04.223694   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:04.223889   37511 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:35:04.225558   37511 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:35:04.225576   37511 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:35:04.225979   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:04.226028   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:04.241059   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0831 22:35:04.241498   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:04.242019   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:04.242038   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:04.242378   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:04.242602   37511 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:35:04.245661   37511 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:04.246151   37511 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:35:04.246189   37511 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:04.246383   37511 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:35:04.246679   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:04.246716   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:04.262321   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I0831 22:35:04.262791   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:04.263395   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:04.263423   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:04.263746   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:04.263953   37511 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:35:04.264145   37511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:04.264172   37511 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:35:04.266813   37511 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:04.267367   37511 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:35:04.267390   37511 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:04.267609   37511 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:35:04.267848   37511 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:35:04.268018   37511 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:35:04.268167   37511 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:35:04.387572   37511 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:04.387631   37511 retry.go:31] will retry after 156.027545ms: dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:35:07.619589   37511 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:35:07.619674   37511 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:35:07.619692   37511 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:07.619716   37511 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:35:07.619740   37511 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:07.619750   37511 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:35:07.620107   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:07.620150   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:07.634715   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0831 22:35:07.635067   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:07.635580   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:07.635605   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:07.635898   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:07.636062   37511 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:07.637455   37511 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:35:07.637473   37511 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:07.637772   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:07.637803   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:07.652384   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0831 22:35:07.652784   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:07.653242   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:07.653269   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:07.653529   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:07.653707   37511 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:35:07.656245   37511 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:07.656611   37511 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:07.656629   37511 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:07.656766   37511 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:07.657059   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:07.657096   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:07.672158   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I0831 22:35:07.672542   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:07.673065   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:07.673087   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:07.673380   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:07.673537   37511 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:07.673706   37511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:07.673722   37511 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:07.676435   37511 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:07.676786   37511 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:07.676817   37511 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:07.676938   37511 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:07.677073   37511 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:07.677227   37511 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:07.677354   37511 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:07.755306   37511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:07.770046   37511 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:07.770074   37511 api_server.go:166] Checking apiserver status ...
	I0831 22:35:07.770106   37511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:07.783829   37511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:35:07.793275   37511 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:07.793326   37511 ssh_runner.go:195] Run: ls
	I0831 22:35:07.797813   37511 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:07.804942   37511 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:07.804963   37511 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:35:07.804975   37511 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:07.804996   37511 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:35:07.805368   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:07.805406   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:07.820232   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0831 22:35:07.820702   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:07.821173   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:07.821200   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:07.821496   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:07.821691   37511 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:07.823112   37511 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:35:07.823129   37511 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:07.823500   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:07.823537   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:07.837984   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33753
	I0831 22:35:07.838372   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:07.838873   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:07.838896   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:07.839221   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:07.839440   37511 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:35:07.841961   37511 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:07.842357   37511 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:07.842401   37511 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:07.842553   37511 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:07.842921   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:07.842966   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:07.857840   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0831 22:35:07.858192   37511 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:07.858663   37511 main.go:141] libmachine: Using API Version  1
	I0831 22:35:07.858682   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:07.859024   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:07.859248   37511 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:07.859473   37511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:07.859496   37511 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:07.862301   37511 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:07.862791   37511 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:07.862817   37511 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:07.863036   37511 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:07.863227   37511 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:07.863395   37511 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:07.863548   37511 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:07.939220   37511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:07.955119   37511 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
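For the nodes that do respond (ha-957517 and ha-957517-m03), the log shows the apiserver check succeeding by hitting https://192.168.39.254:8443/healthz and getting "200 ok". A minimal, hypothetical sketch of that probe is below, assuming it is run from the test host with network access to the VIP; TLS verification is skipped here only to keep the example self-contained, whereas the real check authenticates against the cluster's CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// 192.168.39.254:8443 is the control-plane VIP reported in the log above.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d: %s\n", resp.StatusCode, body)
}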
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 3 (3.72832458s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:12.686870   37627 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:12.686978   37627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:12.686987   37627 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:12.686991   37627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:12.687168   37627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:35:12.687321   37627 out.go:352] Setting JSON to false
	I0831 22:35:12.687386   37627 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:12.687488   37627 notify.go:220] Checking for updates...
	I0831 22:35:12.687791   37627 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:12.687804   37627 status.go:255] checking status of ha-957517 ...
	I0831 22:35:12.688208   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:12.688260   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:12.706625   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0831 22:35:12.707035   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:12.707574   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:12.707621   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:12.707900   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:12.708054   37627 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:35:12.709540   37627 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:35:12.709556   37627 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:12.709914   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:12.709949   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:12.724457   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0831 22:35:12.724840   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:12.725271   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:12.725289   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:12.725667   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:12.725874   37627 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:35:12.728773   37627 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:12.729138   37627 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:12.729165   37627 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:12.729295   37627 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:12.729600   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:12.729639   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:12.744537   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0831 22:35:12.744936   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:12.745358   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:12.745375   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:12.745641   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:12.745799   37627 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:35:12.745975   37627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:12.746007   37627 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:35:12.748551   37627 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:12.748889   37627 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:12.748923   37627 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:12.749043   37627 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:35:12.749217   37627 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:35:12.749355   37627 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:35:12.749475   37627 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:35:12.831662   37627 ssh_runner.go:195] Run: systemctl --version
	I0831 22:35:12.837970   37627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:12.859574   37627 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:12.859613   37627 api_server.go:166] Checking apiserver status ...
	I0831 22:35:12.859657   37627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:12.880491   37627 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:35:12.891228   37627 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:12.891279   37627 ssh_runner.go:195] Run: ls
	I0831 22:35:12.895970   37627 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:12.901007   37627 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:12.901033   37627 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:35:12.901045   37627 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:12.901076   37627 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:35:12.901397   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:12.901451   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:12.918536   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42191
	I0831 22:35:12.919014   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:12.919586   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:12.919609   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:12.919905   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:12.920098   37627 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:35:12.921809   37627 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:35:12.921828   37627 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:35:12.922160   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:12.922198   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:12.937179   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0831 22:35:12.937555   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:12.938059   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:12.938084   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:12.938420   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:12.938610   37627 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:35:12.941458   37627 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:12.941811   37627 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:35:12.941842   37627 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:12.941966   37627 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:35:12.942320   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:12.942361   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:12.957021   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0831 22:35:12.957470   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:12.957895   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:12.957916   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:12.958232   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:12.958430   37627 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:35:12.958658   37627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:12.958678   37627 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:35:12.961627   37627 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:12.961947   37627 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:35:12.961966   37627 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:35:12.962154   37627 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:35:12.962331   37627 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:35:12.962499   37627 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:35:12.962648   37627 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:35:16.035595   37627 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:35:16.035710   37627 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:35:16.035733   37627 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:16.035755   37627 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:35:16.035774   37627 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:35:16.035784   37627 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:35:16.036177   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:16.036238   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:16.050535   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I0831 22:35:16.050907   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:16.051488   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:16.051513   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:16.051823   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:16.052010   37627 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:16.053408   37627 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:35:16.053422   37627 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:16.053802   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:16.053863   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:16.070430   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0831 22:35:16.070814   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:16.071393   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:16.071414   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:16.071689   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:16.071890   37627 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:35:16.074324   37627 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:16.074688   37627 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:16.074710   37627 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:16.074836   37627 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:16.075140   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:16.075173   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:16.090694   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0831 22:35:16.091148   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:16.091627   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:16.091649   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:16.091938   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:16.092123   37627 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:16.092317   37627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:16.092342   37627 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:16.095149   37627 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:16.095507   37627 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:16.095528   37627 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:16.095661   37627 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:16.095815   37627 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:16.095950   37627 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:16.096147   37627 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:16.174666   37627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:16.189258   37627 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:16.189285   37627 api_server.go:166] Checking apiserver status ...
	I0831 22:35:16.189339   37627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:16.202055   37627 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:35:16.212406   37627 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:16.212461   37627 ssh_runner.go:195] Run: ls
	I0831 22:35:16.217734   37627 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:16.222353   37627 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:16.222384   37627 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:35:16.222392   37627 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:16.222405   37627 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:35:16.222815   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:16.222860   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:16.237764   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0831 22:35:16.238188   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:16.238618   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:16.238640   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:16.238978   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:16.239137   37627 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:16.240654   37627 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:35:16.240672   37627 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:16.241081   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:16.241125   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:16.255561   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0831 22:35:16.256084   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:16.256594   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:16.256612   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:16.256912   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:16.257244   37627 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:35:16.260056   37627 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:16.260456   37627 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:16.260488   37627 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:16.260628   37627 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:16.261060   37627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:16.261135   37627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:16.276458   37627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0831 22:35:16.276857   37627 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:16.277392   37627 main.go:141] libmachine: Using API Version  1
	I0831 22:35:16.277418   37627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:16.277756   37627 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:16.277945   37627 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:16.278113   37627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:16.278128   37627 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:16.281007   37627 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:16.281420   37627 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:16.281446   37627 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:16.281617   37627 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:16.281764   37627 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:16.281882   37627 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:16.282015   37627 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:16.358549   37627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:16.373067   37627 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
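Note on the apiserver checks in the stderr block above: for each control-plane node the status command finds the kubeconfig server address, pgreps for a kube-apiserver process, attempts a freezer-cgroup lookup, and finally probes https://192.168.39.254:8443/healthz, reporting "Running" when it gets a 200 with body "ok". The snippet below is a minimal Go sketch of that last healthz probe only; the hard-coded URL, timeout, and the decision to skip TLS verification are illustrative assumptions, not minikube's actual implementation.

	// healthz_probe.go - a minimal sketch of the apiserver health probe seen in
	// the log ("Checking apiserver healthz at https://.../healthz ... returned 200: ok").
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The control-plane VIP serves a self-signed certificate, so this
			// illustration skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok",
		// matching the "returned 200: ok" lines in the log.
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("apiserver status = Running (%s)\n", body)
		} else {
			fmt.Printf("apiserver status = Error (%d)\n", resp.StatusCode)
		}
	}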
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 7 (779.038117ms)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:22.267449   37755 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:22.267566   37755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:22.267575   37755 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:22.267579   37755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:22.267776   37755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:35:22.267976   37755 out.go:352] Setting JSON to false
	I0831 22:35:22.268002   37755 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:22.268041   37755 notify.go:220] Checking for updates...
	I0831 22:35:22.268392   37755 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:22.268407   37755 status.go:255] checking status of ha-957517 ...
	I0831 22:35:22.268859   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.268904   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.289025   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
	I0831 22:35:22.289420   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.290130   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.290151   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.290512   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.290741   37755 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:35:22.292179   37755 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:35:22.292196   37755 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:22.292483   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.292518   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.308318   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0831 22:35:22.308781   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.309244   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.309266   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.309555   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.309723   37755 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:35:22.312312   37755 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:22.312720   37755 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:22.312750   37755 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:22.312860   37755 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:22.313156   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.313192   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.327276   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36069
	I0831 22:35:22.327667   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.328117   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.328141   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.328525   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.328826   37755 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:35:22.333758   37755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:22.333792   37755 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:35:22.336649   37755 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:22.337126   37755 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:22.337152   37755 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:22.340253   37755 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:35:22.340668   37755 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:35:22.340865   37755 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:35:22.341052   37755 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:35:22.430063   37755 ssh_runner.go:195] Run: systemctl --version
	I0831 22:35:22.436228   37755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:22.451191   37755 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:22.451219   37755 api_server.go:166] Checking apiserver status ...
	I0831 22:35:22.451248   37755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:22.466248   37755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:35:22.475672   37755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:22.475709   37755 ssh_runner.go:195] Run: ls
	I0831 22:35:22.479843   37755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:22.483816   37755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:22.483837   37755 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:35:22.483847   37755 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:22.483863   37755 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:35:22.484223   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.484261   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.499233   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0831 22:35:22.499650   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.500101   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.500121   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.500446   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.500633   37755 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:35:22.669888   37755 status.go:330] ha-957517-m02 host status = "Stopped" (err=<nil>)
	I0831 22:35:22.669905   37755 status.go:343] host is not running, skipping remaining checks
	I0831 22:35:22.669912   37755 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:22.669938   37755 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:35:22.670206   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.670261   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.684926   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0831 22:35:22.685415   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.685935   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.685956   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.686236   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.686423   37755 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:22.688249   37755 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:35:22.688266   37755 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:22.688552   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.688590   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.704155   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0831 22:35:22.704625   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.705071   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.705090   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.705377   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.705526   37755 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:35:22.708360   37755 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:22.708722   37755 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:22.708761   37755 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:22.708854   37755 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:22.709147   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.709183   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.723444   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0831 22:35:22.723786   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.724192   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.724211   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.724507   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.724678   37755 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:22.724868   37755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:22.724891   37755 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:22.727671   37755 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:22.728039   37755 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:22.728063   37755 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:22.728174   37755 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:22.728358   37755 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:22.728519   37755 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:22.728658   37755 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:22.806643   37755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:22.821126   37755 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:22.821150   37755 api_server.go:166] Checking apiserver status ...
	I0831 22:35:22.821184   37755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:22.835418   37755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:35:22.844586   37755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:22.844642   37755 ssh_runner.go:195] Run: ls
	I0831 22:35:22.850027   37755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:22.854900   37755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:22.854920   37755 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:35:22.854928   37755 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:22.854952   37755 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:35:22.855233   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.855264   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.870490   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0831 22:35:22.870869   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.871310   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.871351   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.871684   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.871882   37755 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:22.873496   37755 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:35:22.873511   37755 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:22.873808   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.873858   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.888942   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0831 22:35:22.889446   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.889854   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.889876   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.890217   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.890407   37755 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:35:22.893272   37755 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:22.893744   37755 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:22.893768   37755 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:22.893919   37755 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:22.894260   37755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:22.894294   37755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:22.908754   37755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0831 22:35:22.909170   37755 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:22.909632   37755 main.go:141] libmachine: Using API Version  1
	I0831 22:35:22.909651   37755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:22.909908   37755 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:22.910057   37755 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:22.910221   37755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:22.910239   37755 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:22.912696   37755 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:22.913102   37755 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:22.913123   37755 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:22.913273   37755 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:22.913447   37755 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:22.913668   37755 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:22.913807   37755 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:22.990543   37755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:23.005510   37755 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
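Note on the recurring "unable to find freezer cgroup" warning: the command it wraps, `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`, exits 1 on hosts using the cgroup v2 unified hierarchy, where /proc/<pid>/cgroup contains only a single "0::<path>" entry and no per-controller freezer line; the check then falls back to the plain healthz probe, so the warning is benign here. The Go sketch below reproduces that lookup against /proc/self/cgroup; treat the cgroup v2 reading as an inference consistent with the exit-status-1 output shown, not a statement from the report itself.

	// freezer_check.go - a small sketch of the freezer-cgroup lookup that keeps
	// logging "unable to find freezer cgroup" above.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)
	
	func main() {
		f, err := os.Open("/proc/self/cgroup")
		if err != nil {
			fmt.Println("cannot read cgroup file:", err)
			return
		}
		defer f.Close()
	
		// Same pattern as the logged egrep: a numbered entry for the freezer controller.
		freezer := regexp.MustCompile(`^[0-9]+:freezer:`)
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if freezer.MatchString(sc.Text()) {
				fmt.Println("freezer cgroup:", sc.Text())
				return
			}
		}
		// Mirrors the exit-status-1 path in the log: no freezer controller entry found.
		fmt.Println("no freezer cgroup entry (expected on cgroup v2 hosts)")
	}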
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 7 (617.130511ms)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:29.660708   37853 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:29.660937   37853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:29.660945   37853 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:29.660950   37853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:29.661133   37853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:35:29.661282   37853 out.go:352] Setting JSON to false
	I0831 22:35:29.661305   37853 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:29.661351   37853 notify.go:220] Checking for updates...
	I0831 22:35:29.661664   37853 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:29.661677   37853 status.go:255] checking status of ha-957517 ...
	I0831 22:35:29.662031   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.662085   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.682702   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0831 22:35:29.683125   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.683713   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.683735   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.684095   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.684353   37853 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:35:29.685880   37853 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:35:29.685895   37853 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:29.686205   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.686249   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.701159   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0831 22:35:29.701535   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.701971   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.701996   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.702299   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.702473   37853 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:35:29.704938   37853 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:29.705356   37853 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:29.705397   37853 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:29.705447   37853 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:29.705729   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.705762   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.720689   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0831 22:35:29.721171   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.721665   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.721690   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.722010   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.722189   37853 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:35:29.722377   37853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:29.722413   37853 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:35:29.724901   37853 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:29.725316   37853 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:29.725346   37853 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:29.725486   37853 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:35:29.725636   37853 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:35:29.725740   37853 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:35:29.725821   37853 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:35:29.803586   37853 ssh_runner.go:195] Run: systemctl --version
	I0831 22:35:29.809980   37853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:29.825610   37853 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:29.825645   37853 api_server.go:166] Checking apiserver status ...
	I0831 22:35:29.825691   37853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:29.840287   37853 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:35:29.850011   37853 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:29.850069   37853 ssh_runner.go:195] Run: ls
	I0831 22:35:29.854907   37853 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:29.860052   37853 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:29.860073   37853 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:35:29.860083   37853 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:29.860100   37853 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:35:29.860428   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.860463   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.876448   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I0831 22:35:29.876884   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.877497   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.877516   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.877837   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.878053   37853 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:35:29.879659   37853 status.go:330] ha-957517-m02 host status = "Stopped" (err=<nil>)
	I0831 22:35:29.879671   37853 status.go:343] host is not running, skipping remaining checks
	I0831 22:35:29.879678   37853 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:29.879699   37853 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:35:29.880041   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.880082   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.894924   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42841
	I0831 22:35:29.895400   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.895895   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.895916   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.896195   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.896404   37853 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:29.898213   37853 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:35:29.898232   37853 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:29.898588   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.898623   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.913451   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I0831 22:35:29.913866   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.914308   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.914327   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.914603   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.914791   37853 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:35:29.917349   37853 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:29.917760   37853 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:29.917797   37853 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:29.917917   37853 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:29.918231   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:29.918265   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:29.933811   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0831 22:35:29.934259   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:29.934763   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:29.934785   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:29.935085   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:29.935281   37853 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:29.935497   37853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:29.935514   37853 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:29.938256   37853 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:29.938691   37853 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:29.938714   37853 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:29.938841   37853 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:29.939069   37853 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:29.939251   37853 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:29.939438   37853 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:30.020243   37853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:30.038302   37853 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:30.038336   37853 api_server.go:166] Checking apiserver status ...
	I0831 22:35:30.038378   37853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:30.056265   37853 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:35:30.069129   37853 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:30.069188   37853 ssh_runner.go:195] Run: ls
	I0831 22:35:30.074708   37853 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:30.079139   37853 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:30.079165   37853 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:35:30.079174   37853 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:30.079188   37853 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:35:30.079583   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:30.079618   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:30.094673   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0831 22:35:30.095155   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:30.095684   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:30.095712   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:30.096099   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:30.096281   37853 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:30.097659   37853 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:35:30.097675   37853 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:30.098059   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:30.098103   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:30.113109   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0831 22:35:30.113643   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:30.114259   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:30.114284   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:30.114669   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:30.114851   37853 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:35:30.117785   37853 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:30.118221   37853 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:30.118248   37853 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:30.118450   37853 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:30.118736   37853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:30.118771   37853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:30.134403   37853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0831 22:35:30.134983   37853 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:30.135496   37853 main.go:141] libmachine: Using API Version  1
	I0831 22:35:30.135518   37853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:30.135839   37853 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:30.136020   37853 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:30.136205   37853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:30.136220   37853 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:30.139159   37853 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:30.139619   37853 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:30.139644   37853 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:30.139786   37853 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:30.139962   37853 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:30.140088   37853 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:30.140236   37853 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:30.222798   37853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:30.237728   37853 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
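Note on the repeated "exit status 7": each run reports ha-957517-m02 as Stopped for host, kubelet, apiserver, and kubeconfig while the other nodes are healthy, and the status command keeps returning 7. A plausible reading is that the exit code ORs one bit flag per unhealthy component across all nodes, so a fully stopped node yields 1|2|4 = 7. The sketch below illustrates that composition; the flag names and values are assumptions for illustration and are not copied from minikube's source.

	// exit_code.go - a hypothetical sketch of how a bit-flag exit code such as
	// "exit status 7" could be composed from per-node component states.
	package main
	
	import "fmt"
	
	const (
		hostNotRunning      = 1 << 0 // 1
		kubeletNotRunning   = 1 << 1 // 2
		apiserverNotRunning = 1 << 2 // 4
	)
	
	type nodeStatus struct {
		name                     string
		host, kubelet, apiserver bool // true = running
	}
	
	func exitCode(nodes []nodeStatus) int {
		code := 0
		for _, n := range nodes {
			if !n.host {
				code |= hostNotRunning
			}
			if !n.kubelet {
				code |= kubeletNotRunning
			}
			if !n.apiserver {
				code |= apiserverNotRunning
			}
		}
		return code
	}
	
	func main() {
		// Matches the report: m02 is fully stopped, the other nodes are running.
		nodes := []nodeStatus{
			{"ha-957517", true, true, true},
			{"ha-957517-m02", false, false, false},
			{"ha-957517-m03", true, true, true},
			{"ha-957517-m04", true, true, true},
		}
		fmt.Println("exit status", exitCode(nodes)) // prints: exit status 7
	}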
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 7 (620.327779ms)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-957517-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:38.705966   37957 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:38.706106   37957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:38.706117   37957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.706122   37957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:38.706334   37957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:35:38.706527   37957 out.go:352] Setting JSON to false
	I0831 22:35:38.706555   37957 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:38.706592   37957 notify.go:220] Checking for updates...
	I0831 22:35:38.706948   37957 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:38.706964   37957 status.go:255] checking status of ha-957517 ...
	I0831 22:35:38.707409   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.707476   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.725399   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43159
	I0831 22:35:38.725785   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.726405   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.726437   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.726740   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.726931   37957 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:35:38.728426   37957 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:35:38.728444   37957 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:38.728761   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.728812   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.743266   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0831 22:35:38.743673   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.744118   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.744135   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.744423   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.744618   37957 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:35:38.747579   37957 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:38.747972   37957 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:38.747998   37957 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:38.748161   37957 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:35:38.748443   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.748474   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.762893   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0831 22:35:38.763295   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.763745   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.763764   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.764073   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.764262   37957 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:35:38.764466   37957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:38.764495   37957 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:35:38.767045   37957 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:38.767482   37957 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:35:38.767514   37957 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:35:38.767633   37957 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:35:38.767776   37957 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:35:38.767902   37957 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:35:38.768059   37957 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:35:38.847787   37957 ssh_runner.go:195] Run: systemctl --version
	I0831 22:35:38.856825   37957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:38.877814   37957 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:38.877854   37957 api_server.go:166] Checking apiserver status ...
	I0831 22:35:38.877909   37957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:38.893712   37957 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0831 22:35:38.903410   37957 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:38.903473   37957 ssh_runner.go:195] Run: ls
	I0831 22:35:38.908397   37957 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:38.912635   37957 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:38.912654   37957 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:35:38.912667   37957 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:38.912686   37957 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:35:38.913057   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.913095   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.927982   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0831 22:35:38.928336   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.928745   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.928762   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.929079   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.929320   37957 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:35:38.930871   37957 status.go:330] ha-957517-m02 host status = "Stopped" (err=<nil>)
	I0831 22:35:38.930887   37957 status.go:343] host is not running, skipping remaining checks
	I0831 22:35:38.930895   37957 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:38.930916   37957 status.go:255] checking status of ha-957517-m03 ...
	I0831 22:35:38.931203   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.931239   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.946361   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0831 22:35:38.946750   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.947215   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.947237   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.947523   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.947684   37957 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:38.949118   37957 status.go:330] ha-957517-m03 host status = "Running" (err=<nil>)
	I0831 22:35:38.949135   37957 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:38.949492   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.949526   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.963490   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0831 22:35:38.963848   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.964315   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.964348   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.964676   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.964860   37957 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:35:38.967134   37957 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:38.967505   37957 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:38.967529   37957 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:38.967660   37957 host.go:66] Checking if "ha-957517-m03" exists ...
	I0831 22:35:38.967940   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:38.967971   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:38.982725   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0831 22:35:38.983081   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:38.983553   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:38.983575   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:38.983839   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:38.984036   37957 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:38.984222   37957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:38.984241   37957 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:38.986929   37957 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:38.987381   37957 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:38.987412   37957 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:38.987543   37957 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:38.987689   37957 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:38.987832   37957 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:38.987979   37957 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:39.074173   37957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:39.092916   37957 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:35:39.092948   37957 api_server.go:166] Checking apiserver status ...
	I0831 22:35:39.092986   37957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:39.110545   37957 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0831 22:35:39.123283   37957 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:35:39.123353   37957 ssh_runner.go:195] Run: ls
	I0831 22:35:39.127543   37957 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:35:39.131814   37957 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:35:39.131835   37957 status.go:422] ha-957517-m03 apiserver status = Running (err=<nil>)
	I0831 22:35:39.131843   37957 status.go:257] ha-957517-m03 status: &{Name:ha-957517-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:39.131866   37957 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:35:39.132162   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:39.132192   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:39.146561   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0831 22:35:39.147034   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:39.147554   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:39.147572   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:39.147934   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:39.148136   37957 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:39.149674   37957 status.go:330] ha-957517-m04 host status = "Running" (err=<nil>)
	I0831 22:35:39.149690   37957 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:39.149997   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:39.150034   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:39.164956   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0831 22:35:39.165348   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:39.165809   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:39.165831   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:39.166144   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:39.166313   37957 main.go:141] libmachine: (ha-957517-m04) Calling .GetIP
	I0831 22:35:39.168847   37957 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:39.169281   37957 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:39.169312   37957 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:39.169426   37957 host.go:66] Checking if "ha-957517-m04" exists ...
	I0831 22:35:39.169728   37957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:39.169760   37957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:39.184517   37957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0831 22:35:39.184884   37957 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:39.185350   37957 main.go:141] libmachine: Using API Version  1
	I0831 22:35:39.185371   37957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:39.185692   37957 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:39.185920   37957 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:39.186121   37957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:35:39.186163   37957 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:39.189103   37957 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:39.189578   37957 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:39.189599   37957 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:39.189706   37957 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:39.189839   37957 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:39.189963   37957 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:39.190155   37957 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:39.268486   37957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:39.283772   37957 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
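Note: the ha-957517-m03 block above shows the three-step apiserver probe that the status code runs over SSH: pgrep for the kube-apiserver process, a check for a cgroup v1 freezer entry (the "unable to find freezer cgroup" warning just means the egrep found no freezer line in /proc/1410/cgroup, which is expected for instance on hosts that only expose the unified cgroup v2 hierarchy, and the probe continues regardless), and finally a GET against /healthz on the HA VIP. Below is a minimal local sketch of the same sequence, assuming direct access to the node rather than minikube's ssh_runner and using InsecureSkipVerify purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: newest kube-apiserver process, mirroring the pgrep call in the log.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
	} else {
		// Step 2: look for a cgroup v1 freezer entry for that PID. When the line is
		// absent (e.g. unified cgroup v2), this is where the warning above comes from.
		cgroupFile := "/proc/" + strings.TrimSpace(string(pid)) + "/cgroup"
		if out, err := exec.Command("grep", "-E", "^[0-9]+:freezer:", cgroupFile).Output(); err != nil {
			fmt.Println("no freezer cgroup entry; continuing with the healthz probe")
		} else {
			fmt.Print(string(out))
		}
	}

	// Step 3: probe the apiserver health endpoint on the HA VIP from this run.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body)) // 200 "ok" in the log
}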
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr" : exit status 7
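The exit status 7 reported here is consistent with the node states logged above: ha-957517-m02 shows Host, Kubelet and APIServer all Stopped, and minikube status reports a non-zero exit whenever any component is down, so the blanket exit-code check in ha_test.go fails even though the other three nodes are healthy. A minimal sketch of re-running the failing step and reading that exit code, reusing the binary path, profile and flags from the invocation above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-run of the failing step; arguments copied from the test invocation above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-957517", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("status exit code: 0 (all components healthy)")
	case errors.As(err, &exitErr):
		// 7 was observed in this run, with one node fully stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube status:", err)
	}
}

In this run the printed code would be 7; bringing m02 back up (the `node start m02` command visible in the audit table below) is the step this RestartSecondaryNode test is exercising.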
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-957517 -n ha-957517
helpers_test.go:245: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p ha-957517 logs -n 25: (1.475589893s)
helpers_test.go:253: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m03_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m04 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp testdata/cp-test.txt                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m04_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03:/home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m03 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-957517 node stop m02 -v=7                                                     | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-957517 node start m02 -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:27:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:27:40.945802   32390 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:27:40.946098   32390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:27:40.946108   32390 out.go:358] Setting ErrFile to fd 2...
	I0831 22:27:40.946113   32390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:27:40.946301   32390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:27:40.946906   32390 out.go:352] Setting JSON to false
	I0831 22:27:40.947799   32390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4208,"bootTime":1725139053,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:27:40.947860   32390 start.go:139] virtualization: kvm guest
	I0831 22:27:40.950113   32390 out.go:177] * [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:27:40.951503   32390 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:27:40.951553   32390 notify.go:220] Checking for updates...
	I0831 22:27:40.953810   32390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:27:40.955161   32390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:27:40.956489   32390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:27:40.957570   32390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:27:40.958683   32390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:27:40.959945   32390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:27:40.994663   32390 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 22:27:40.995889   32390 start.go:297] selected driver: kvm2
	I0831 22:27:40.995904   32390 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:27:40.995914   32390 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:27:40.996570   32390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:27:40.996662   32390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:27:41.011574   32390 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:27:41.011620   32390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:27:41.011870   32390 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:27:41.011898   32390 cni.go:84] Creating CNI manager for ""
	I0831 22:27:41.011904   32390 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0831 22:27:41.011910   32390 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:27:41.011960   32390 start.go:340] cluster config:
	{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:27:41.012059   32390 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:27:41.013820   32390 out.go:177] * Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	I0831 22:27:41.015021   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:27:41.015059   32390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:27:41.015076   32390 cache.go:56] Caching tarball of preloaded images
	I0831 22:27:41.015179   32390 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:27:41.015193   32390 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:27:41.015592   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:27:41.015616   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json: {Name:mkff77987e3b2e05fabfb3dbe17ba9d399f610a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:27:41.015772   32390 start.go:360] acquireMachinesLock for ha-957517: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:27:41.015808   32390 start.go:364] duration metric: took 16.854µs to acquireMachinesLock for "ha-957517"
	I0831 22:27:41.015824   32390 start.go:93] Provisioning new machine with config: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:27:41.015873   32390 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 22:27:41.017441   32390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 22:27:41.017559   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:27:41.017595   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:27:41.031812   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0831 22:27:41.032223   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:27:41.032790   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:27:41.032813   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:27:41.033097   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:27:41.033258   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:27:41.033483   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:27:41.033646   32390 start.go:159] libmachine.API.Create for "ha-957517" (driver="kvm2")
	I0831 22:27:41.033711   32390 client.go:168] LocalClient.Create starting
	I0831 22:27:41.033744   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:27:41.033772   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:27:41.033785   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:27:41.033833   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:27:41.033851   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:27:41.033865   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:27:41.033882   32390 main.go:141] libmachine: Running pre-create checks...
	I0831 22:27:41.033891   32390 main.go:141] libmachine: (ha-957517) Calling .PreCreateCheck
	I0831 22:27:41.034216   32390 main.go:141] libmachine: (ha-957517) Calling .GetConfigRaw
	I0831 22:27:41.034559   32390 main.go:141] libmachine: Creating machine...
	I0831 22:27:41.034577   32390 main.go:141] libmachine: (ha-957517) Calling .Create
	I0831 22:27:41.034714   32390 main.go:141] libmachine: (ha-957517) Creating KVM machine...
	I0831 22:27:41.035870   32390 main.go:141] libmachine: (ha-957517) DBG | found existing default KVM network
	I0831 22:27:41.036537   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.036401   32413 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0831 22:27:41.036573   32390 main.go:141] libmachine: (ha-957517) DBG | created network xml: 
	I0831 22:27:41.036593   32390 main.go:141] libmachine: (ha-957517) DBG | <network>
	I0831 22:27:41.036601   32390 main.go:141] libmachine: (ha-957517) DBG |   <name>mk-ha-957517</name>
	I0831 22:27:41.036611   32390 main.go:141] libmachine: (ha-957517) DBG |   <dns enable='no'/>
	I0831 22:27:41.036634   32390 main.go:141] libmachine: (ha-957517) DBG |   
	I0831 22:27:41.036671   32390 main.go:141] libmachine: (ha-957517) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0831 22:27:41.036685   32390 main.go:141] libmachine: (ha-957517) DBG |     <dhcp>
	I0831 22:27:41.036697   32390 main.go:141] libmachine: (ha-957517) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0831 22:27:41.036713   32390 main.go:141] libmachine: (ha-957517) DBG |     </dhcp>
	I0831 22:27:41.036728   32390 main.go:141] libmachine: (ha-957517) DBG |   </ip>
	I0831 22:27:41.036777   32390 main.go:141] libmachine: (ha-957517) DBG |   
	I0831 22:27:41.036799   32390 main.go:141] libmachine: (ha-957517) DBG | </network>
	I0831 22:27:41.036812   32390 main.go:141] libmachine: (ha-957517) DBG | 
	I0831 22:27:41.041570   32390 main.go:141] libmachine: (ha-957517) DBG | trying to create private KVM network mk-ha-957517 192.168.39.0/24...
	I0831 22:27:41.113674   32390 main.go:141] libmachine: (ha-957517) DBG | private KVM network mk-ha-957517 192.168.39.0/24 created
	I0831 22:27:41.113698   32390 main.go:141] libmachine: (ha-957517) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517 ...
	I0831 22:27:41.113715   32390 main.go:141] libmachine: (ha-957517) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:27:41.113758   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.113687   32413 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:27:41.113858   32390 main.go:141] libmachine: (ha-957517) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:27:41.352403   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.352292   32413 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa...
	I0831 22:27:41.479076   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.478918   32413 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/ha-957517.rawdisk...
	I0831 22:27:41.479112   32390 main.go:141] libmachine: (ha-957517) DBG | Writing magic tar header
	I0831 22:27:41.479128   32390 main.go:141] libmachine: (ha-957517) DBG | Writing SSH key tar header
	I0831 22:27:41.479141   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:41.479036   32413 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517 ...
	I0831 22:27:41.479154   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517
	I0831 22:27:41.479160   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:27:41.479175   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:27:41.479182   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:27:41.479196   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517 (perms=drwx------)
	I0831 22:27:41.479206   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:27:41.479221   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:27:41.479232   32390 main.go:141] libmachine: (ha-957517) DBG | Checking permissions on dir: /home
	I0831 22:27:41.479244   32390 main.go:141] libmachine: (ha-957517) DBG | Skipping /home - not owner
	I0831 22:27:41.479254   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:27:41.479260   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:27:41.479270   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:27:41.479278   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:27:41.479289   32390 main.go:141] libmachine: (ha-957517) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:27:41.479297   32390 main.go:141] libmachine: (ha-957517) Creating domain...
	I0831 22:27:41.480432   32390 main.go:141] libmachine: (ha-957517) define libvirt domain using xml: 
	I0831 22:27:41.480461   32390 main.go:141] libmachine: (ha-957517) <domain type='kvm'>
	I0831 22:27:41.480472   32390 main.go:141] libmachine: (ha-957517)   <name>ha-957517</name>
	I0831 22:27:41.480479   32390 main.go:141] libmachine: (ha-957517)   <memory unit='MiB'>2200</memory>
	I0831 22:27:41.480488   32390 main.go:141] libmachine: (ha-957517)   <vcpu>2</vcpu>
	I0831 22:27:41.480501   32390 main.go:141] libmachine: (ha-957517)   <features>
	I0831 22:27:41.480510   32390 main.go:141] libmachine: (ha-957517)     <acpi/>
	I0831 22:27:41.480520   32390 main.go:141] libmachine: (ha-957517)     <apic/>
	I0831 22:27:41.480528   32390 main.go:141] libmachine: (ha-957517)     <pae/>
	I0831 22:27:41.480549   32390 main.go:141] libmachine: (ha-957517)     
	I0831 22:27:41.480558   32390 main.go:141] libmachine: (ha-957517)   </features>
	I0831 22:27:41.480566   32390 main.go:141] libmachine: (ha-957517)   <cpu mode='host-passthrough'>
	I0831 22:27:41.480596   32390 main.go:141] libmachine: (ha-957517)   
	I0831 22:27:41.480617   32390 main.go:141] libmachine: (ha-957517)   </cpu>
	I0831 22:27:41.480629   32390 main.go:141] libmachine: (ha-957517)   <os>
	I0831 22:27:41.480640   32390 main.go:141] libmachine: (ha-957517)     <type>hvm</type>
	I0831 22:27:41.480651   32390 main.go:141] libmachine: (ha-957517)     <boot dev='cdrom'/>
	I0831 22:27:41.480660   32390 main.go:141] libmachine: (ha-957517)     <boot dev='hd'/>
	I0831 22:27:41.480666   32390 main.go:141] libmachine: (ha-957517)     <bootmenu enable='no'/>
	I0831 22:27:41.480673   32390 main.go:141] libmachine: (ha-957517)   </os>
	I0831 22:27:41.480680   32390 main.go:141] libmachine: (ha-957517)   <devices>
	I0831 22:27:41.480692   32390 main.go:141] libmachine: (ha-957517)     <disk type='file' device='cdrom'>
	I0831 22:27:41.480708   32390 main.go:141] libmachine: (ha-957517)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/boot2docker.iso'/>
	I0831 22:27:41.480723   32390 main.go:141] libmachine: (ha-957517)       <target dev='hdc' bus='scsi'/>
	I0831 22:27:41.480742   32390 main.go:141] libmachine: (ha-957517)       <readonly/>
	I0831 22:27:41.480755   32390 main.go:141] libmachine: (ha-957517)     </disk>
	I0831 22:27:41.480769   32390 main.go:141] libmachine: (ha-957517)     <disk type='file' device='disk'>
	I0831 22:27:41.480781   32390 main.go:141] libmachine: (ha-957517)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:27:41.480796   32390 main.go:141] libmachine: (ha-957517)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/ha-957517.rawdisk'/>
	I0831 22:27:41.480808   32390 main.go:141] libmachine: (ha-957517)       <target dev='hda' bus='virtio'/>
	I0831 22:27:41.480817   32390 main.go:141] libmachine: (ha-957517)     </disk>
	I0831 22:27:41.480826   32390 main.go:141] libmachine: (ha-957517)     <interface type='network'>
	I0831 22:27:41.480855   32390 main.go:141] libmachine: (ha-957517)       <source network='mk-ha-957517'/>
	I0831 22:27:41.480878   32390 main.go:141] libmachine: (ha-957517)       <model type='virtio'/>
	I0831 22:27:41.480888   32390 main.go:141] libmachine: (ha-957517)     </interface>
	I0831 22:27:41.480898   32390 main.go:141] libmachine: (ha-957517)     <interface type='network'>
	I0831 22:27:41.480911   32390 main.go:141] libmachine: (ha-957517)       <source network='default'/>
	I0831 22:27:41.480922   32390 main.go:141] libmachine: (ha-957517)       <model type='virtio'/>
	I0831 22:27:41.480933   32390 main.go:141] libmachine: (ha-957517)     </interface>
	I0831 22:27:41.480944   32390 main.go:141] libmachine: (ha-957517)     <serial type='pty'>
	I0831 22:27:41.480962   32390 main.go:141] libmachine: (ha-957517)       <target port='0'/>
	I0831 22:27:41.480978   32390 main.go:141] libmachine: (ha-957517)     </serial>
	I0831 22:27:41.480999   32390 main.go:141] libmachine: (ha-957517)     <console type='pty'>
	I0831 22:27:41.481009   32390 main.go:141] libmachine: (ha-957517)       <target type='serial' port='0'/>
	I0831 22:27:41.481018   32390 main.go:141] libmachine: (ha-957517)     </console>
	I0831 22:27:41.481039   32390 main.go:141] libmachine: (ha-957517)     <rng model='virtio'>
	I0831 22:27:41.481052   32390 main.go:141] libmachine: (ha-957517)       <backend model='random'>/dev/random</backend>
	I0831 22:27:41.481066   32390 main.go:141] libmachine: (ha-957517)     </rng>
	I0831 22:27:41.481085   32390 main.go:141] libmachine: (ha-957517)     
	I0831 22:27:41.481091   32390 main.go:141] libmachine: (ha-957517)     
	I0831 22:27:41.481101   32390 main.go:141] libmachine: (ha-957517)   </devices>
	I0831 22:27:41.481110   32390 main.go:141] libmachine: (ha-957517) </domain>
	I0831 22:27:41.481125   32390 main.go:141] libmachine: (ha-957517) 
	I0831 22:27:41.485236   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:1a:2a:50 in network default
	I0831 22:27:41.485771   32390 main.go:141] libmachine: (ha-957517) Ensuring networks are active...
	I0831 22:27:41.485792   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:41.486524   32390 main.go:141] libmachine: (ha-957517) Ensuring network default is active
	I0831 22:27:41.486828   32390 main.go:141] libmachine: (ha-957517) Ensuring network mk-ha-957517 is active
	I0831 22:27:41.487389   32390 main.go:141] libmachine: (ha-957517) Getting domain xml...
	I0831 22:27:41.488032   32390 main.go:141] libmachine: (ha-957517) Creating domain...
	I0831 22:27:42.668686   32390 main.go:141] libmachine: (ha-957517) Waiting to get IP...
	I0831 22:27:42.669539   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:42.669902   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:42.669946   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:42.669902   32413 retry.go:31] will retry after 310.308268ms: waiting for machine to come up
	I0831 22:27:42.981397   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:42.981861   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:42.981881   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:42.981820   32413 retry.go:31] will retry after 344.443306ms: waiting for machine to come up
	I0831 22:27:43.328335   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:43.328772   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:43.328794   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:43.328726   32413 retry.go:31] will retry after 365.569469ms: waiting for machine to come up
	I0831 22:27:43.696166   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:43.696619   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:43.696647   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:43.696574   32413 retry.go:31] will retry after 401.219481ms: waiting for machine to come up
	I0831 22:27:44.099095   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:44.099616   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:44.099645   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:44.099568   32413 retry.go:31] will retry after 481.487587ms: waiting for machine to come up
	I0831 22:27:44.583472   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:44.583852   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:44.583880   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:44.583807   32413 retry.go:31] will retry after 687.283133ms: waiting for machine to come up
	I0831 22:27:45.272575   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:45.272996   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:45.273036   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:45.272916   32413 retry.go:31] will retry after 1.085305512s: waiting for machine to come up
	I0831 22:27:46.359260   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:46.359786   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:46.359814   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:46.359737   32413 retry.go:31] will retry after 1.165071673s: waiting for machine to come up
	I0831 22:27:47.526987   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:47.527401   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:47.527434   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:47.527370   32413 retry.go:31] will retry after 1.255910404s: waiting for machine to come up
	I0831 22:27:48.784746   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:48.785208   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:48.785237   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:48.785174   32413 retry.go:31] will retry after 2.245132247s: waiting for machine to come up
	I0831 22:27:51.033508   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:51.033946   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:51.033972   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:51.033914   32413 retry.go:31] will retry after 1.78980009s: waiting for machine to come up
	I0831 22:27:52.824792   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:52.825224   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:52.825251   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:52.825164   32413 retry.go:31] will retry after 2.949499003s: waiting for machine to come up
	I0831 22:27:55.776461   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:55.776812   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:55.776836   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:55.776778   32413 retry.go:31] will retry after 2.977555208s: waiting for machine to come up
	I0831 22:27:58.757418   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:27:58.757866   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find current IP address of domain ha-957517 in network mk-ha-957517
	I0831 22:27:58.757901   32390 main.go:141] libmachine: (ha-957517) DBG | I0831 22:27:58.757797   32413 retry.go:31] will retry after 4.155208137s: waiting for machine to come up
	I0831 22:28:02.915266   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.915647   32390 main.go:141] libmachine: (ha-957517) Found IP for machine: 192.168.39.137
	I0831 22:28:02.915669   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has current primary IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.915686   32390 main.go:141] libmachine: (ha-957517) Reserving static IP address...
	I0831 22:28:02.916095   32390 main.go:141] libmachine: (ha-957517) DBG | unable to find host DHCP lease matching {name: "ha-957517", mac: "52:54:00:e0:42:4f", ip: "192.168.39.137"} in network mk-ha-957517
	I0831 22:28:02.987594   32390 main.go:141] libmachine: (ha-957517) DBG | Getting to WaitForSSH function...
	I0831 22:28:02.987619   32390 main.go:141] libmachine: (ha-957517) Reserved static IP address: 192.168.39.137
	I0831 22:28:02.987631   32390 main.go:141] libmachine: (ha-957517) Waiting for SSH to be available...
	I0831 22:28:02.989870   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.990315   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:02.990355   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:02.990452   32390 main.go:141] libmachine: (ha-957517) DBG | Using SSH client type: external
	I0831 22:28:02.990478   32390 main.go:141] libmachine: (ha-957517) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa (-rw-------)
	I0831 22:28:02.990505   32390 main.go:141] libmachine: (ha-957517) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:28:02.990512   32390 main.go:141] libmachine: (ha-957517) DBG | About to run SSH command:
	I0831 22:28:02.990520   32390 main.go:141] libmachine: (ha-957517) DBG | exit 0
	I0831 22:28:03.111793   32390 main.go:141] libmachine: (ha-957517) DBG | SSH cmd err, output: <nil>: 
	I0831 22:28:03.112053   32390 main.go:141] libmachine: (ha-957517) KVM machine creation complete!
	I0831 22:28:03.112363   32390 main.go:141] libmachine: (ha-957517) Calling .GetConfigRaw
	I0831 22:28:03.112895   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:03.113083   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:03.113263   32390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:28:03.113275   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:03.114482   32390 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:28:03.114501   32390 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:28:03.114506   32390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:28:03.114512   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.116359   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.116653   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.116689   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.116785   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.116970   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.117100   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.117227   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.117377   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.117581   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.117595   32390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:28:03.218736   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:28:03.218762   32390 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:28:03.218772   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.221652   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.222004   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.222029   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.222172   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.222366   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.222668   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.222832   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.223022   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.223200   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.223213   32390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:28:03.324409   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:28:03.324513   32390 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:28:03.324523   32390 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:28:03.324530   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:28:03.324771   32390 buildroot.go:166] provisioning hostname "ha-957517"
	I0831 22:28:03.324797   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:28:03.324976   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.327800   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.328195   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.328222   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.328351   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.328546   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.328726   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.328850   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.329007   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.329250   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.329269   32390 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517 && echo "ha-957517" | sudo tee /etc/hostname
	I0831 22:28:03.446380   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:28:03.446408   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.448995   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.449406   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.449440   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.449618   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.449796   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.449947   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.450054   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.450247   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.450503   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.450525   32390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:28:03.560618   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:28:03.560652   32390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:28:03.560696   32390 buildroot.go:174] setting up certificates
	I0831 22:28:03.560711   32390 provision.go:84] configureAuth start
	I0831 22:28:03.560725   32390 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:28:03.560979   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:03.563370   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.563685   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.563723   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.563847   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.566002   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.566315   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.566337   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.566486   32390 provision.go:143] copyHostCerts
	I0831 22:28:03.566513   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:03.566555   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:28:03.566577   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:03.566654   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:28:03.566767   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:03.566792   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:28:03.566798   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:03.566831   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:28:03.566903   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:03.566928   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:28:03.566936   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:03.566969   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:28:03.567051   32390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517 san=[127.0.0.1 192.168.39.137 ha-957517 localhost minikube]
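The provision.go line above lists the SANs baked into the generated server certificate. Purely as a reference for how IP and DNS SANs end up in an x509 template, here is a small self-contained Go sketch; it self-signs for brevity (the real flow signs with the minikube CA key), and the organization, lifetime and key size are assumptions:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Hypothetical sketch: an x509 template carrying both IP and DNS SANs,
    // in the spirit of the san=[...] list logged above.
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-957517"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.137")},
        DNSNames:     []string{"ha-957517", "localhost", "minikube"},
    }
    // Self-signed here for brevity; the real flow signs with the CA certificate and key.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}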
	I0831 22:28:03.720987   32390 provision.go:177] copyRemoteCerts
	I0831 22:28:03.721048   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:28:03.721087   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.723766   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.724157   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.724186   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.724393   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.724584   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.724739   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.724945   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:03.805712   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:28:03.805793   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:28:03.831097   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:28:03.831177   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0831 22:28:03.856577   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:28:03.856660   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:28:03.881472   32390 provision.go:87] duration metric: took 320.748156ms to configureAuth
	I0831 22:28:03.881495   32390 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:28:03.881686   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:03.881783   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:03.884343   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.884689   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:03.884714   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:03.884885   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:03.885065   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.885210   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:03.885359   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:03.885492   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:03.885703   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:03.885730   32390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:28:04.108228   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:28:04.108261   32390 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:28:04.108271   32390 main.go:141] libmachine: (ha-957517) Calling .GetURL
	I0831 22:28:04.109625   32390 main.go:141] libmachine: (ha-957517) DBG | Using libvirt version 6000000
	I0831 22:28:04.111887   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.112243   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.112267   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.112409   32390 main.go:141] libmachine: Docker is up and running!
	I0831 22:28:04.112419   32390 main.go:141] libmachine: Reticulating splines...
	I0831 22:28:04.112425   32390 client.go:171] duration metric: took 23.07870571s to LocalClient.Create
	I0831 22:28:04.112453   32390 start.go:167] duration metric: took 23.078815782s to libmachine.API.Create "ha-957517"
	I0831 22:28:04.112467   32390 start.go:293] postStartSetup for "ha-957517" (driver="kvm2")
	I0831 22:28:04.112480   32390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:28:04.112496   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.112750   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:28:04.112775   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.115036   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.115383   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.115412   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.115584   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.115787   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.115922   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.116081   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:04.198336   32390 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:28:04.202829   32390 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:28:04.202861   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:28:04.202932   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:28:04.203001   32390 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:28:04.203017   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:28:04.203150   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:28:04.212859   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:04.238294   32390 start.go:296] duration metric: took 125.815024ms for postStartSetup
	I0831 22:28:04.238369   32390 main.go:141] libmachine: (ha-957517) Calling .GetConfigRaw
	I0831 22:28:04.238895   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:04.241472   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.241847   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.241875   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.242112   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:04.242302   32390 start.go:128] duration metric: took 23.226421296s to createHost
	I0831 22:28:04.242341   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.244781   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.245093   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.245115   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.245245   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.245442   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.245622   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.245780   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.245944   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:04.246103   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:28:04.246117   32390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:28:04.348106   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143284.316550028
	
	I0831 22:28:04.348134   32390 fix.go:216] guest clock: 1725143284.316550028
	I0831 22:28:04.348145   32390 fix.go:229] Guest: 2024-08-31 22:28:04.316550028 +0000 UTC Remote: 2024-08-31 22:28:04.242320677 +0000 UTC m=+23.331086893 (delta=74.229351ms)
	I0831 22:28:04.348202   32390 fix.go:200] guest clock delta is within tolerance: 74.229351ms
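The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host clock and accept the machine when the delta stays inside a tolerance. A stripped-down sketch of that comparison, using the value captured in the log; the one-second tolerance is an assumption for illustration:

package main

import (
    "fmt"
    "math"
    "strconv"
    "strings"
    "time"
)

func main() {
    // Output of `date +%s.%N` captured from the guest (value taken from the log above).
    raw := "1725143284.316550028"
    parts := strings.SplitN(strings.TrimSpace(raw), ".", 2)
    sec, _ := strconv.ParseInt(parts[0], 10, 64)
    nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    guest := time.Unix(sec, nsec)

    // Compare against the local (host) clock and check the skew.
    delta := time.Since(guest)
    const tolerance = time.Second // illustrative tolerance, not minikube's exact value
    if math.Abs(float64(delta)) <= float64(tolerance) {
        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
    }
}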
	I0831 22:28:04.348212   32390 start.go:83] releasing machines lock for "ha-957517", held for 23.332394313s
	I0831 22:28:04.348252   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.348525   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:04.350920   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.351259   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.351283   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.351454   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.351888   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.352047   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:04.352114   32390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:28:04.352158   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.352258   32390 ssh_runner.go:195] Run: cat /version.json
	I0831 22:28:04.352282   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:04.355100   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355471   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.355497   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355518   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355615   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.355822   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.355880   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:04.355906   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:04.355973   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.356052   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:04.356138   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:04.356223   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:04.356358   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:04.356505   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:04.459874   32390 ssh_runner.go:195] Run: systemctl --version
	I0831 22:28:04.466047   32390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:28:04.625348   32390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:28:04.631490   32390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:28:04.631564   32390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:28:04.648534   32390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:28:04.648559   32390 start.go:495] detecting cgroup driver to use...
	I0831 22:28:04.648650   32390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:28:04.666821   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:28:04.680864   32390 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:28:04.680936   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:28:04.695065   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:28:04.709207   32390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:28:04.831827   32390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:28:04.991491   32390 docker.go:233] disabling docker service ...
	I0831 22:28:04.991550   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:28:05.006362   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:28:05.019197   32390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:28:05.142686   32390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:28:05.255408   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:28:05.270969   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:28:05.290387   32390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:28:05.290460   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.301062   32390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:28:05.301145   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.311884   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.322301   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.333290   32390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:28:05.344688   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.356344   32390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:05.377326   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
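The run of sed invocations above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctls). The same "replace the whole line that assigns a key" idea looks like this in Go; the sketch edits an in-memory string rather than the real config file, and the helper name setKey is made up for illustration:

package main

import (
    "fmt"
    "os"
    "regexp"
)

// setKey replaces the entire line that assigns `key`, mirroring the sed pattern
// `s|^.*key = .*$|key = "value"|` used in the log above.
func setKey(conf []byte, key, value string) []byte {
    re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    return re.ReplaceAll(conf, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
}

func main() {
    conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
    conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
    conf = setKey(conf, "cgroup_manager", "cgroupfs")
    os.Stdout.Write(conf)
}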
	I0831 22:28:05.388494   32390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:28:05.398978   32390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:28:05.399043   32390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:28:05.413376   32390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:28:05.423525   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:05.535870   32390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:28:05.632034   32390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:28:05.632113   32390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
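"Will wait 60s for socket path" above is a poll-until-present check on /var/run/crio/crio.sock after the crio restart. A small sketch of that polling pattern; the path and 60s deadline come from the log, while the 500ms retry interval is an assumption:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForPath polls until the file exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out waiting for %s", path)
        }
        time.Sleep(500 * time.Millisecond)
    }
}

func main() {
    if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        fmt.Println(err)
    }
}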
	I0831 22:28:05.637232   32390 start.go:563] Will wait 60s for crictl version
	I0831 22:28:05.637289   32390 ssh_runner.go:195] Run: which crictl
	I0831 22:28:05.641234   32390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:28:05.685711   32390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:28:05.685799   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:05.716694   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:05.750305   32390 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:28:05.751458   32390 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:28:05.754007   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:05.754345   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:05.754373   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:05.754564   32390 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:28:05.758880   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
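The bash one-liner above filters any stale host.minikube.internal entry out of /etc/hosts and appends the current mapping, which keeps the edit idempotent. A rough Go equivalent of that filter-then-append step, operating on a string instead of the real file (ensureHostsEntry is a hypothetical helper):

package main

import (
    "fmt"
    "strings"
)

// ensureHostsEntry drops any line that already maps `name` and appends a fresh entry,
// mirroring the `grep -v ... ; echo ...` pipeline in the log above.
func ensureHostsEntry(hosts, ip, name string) string {
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+name)
    return strings.Join(kept, "\n") + "\n"
}

func main() {
    hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}

Running it twice over the same input leaves the contents unchanged, which is why the logged command can safely be re-executed on every start.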
	I0831 22:28:05.772469   32390 kubeadm.go:883] updating cluster {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:28:05.772597   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:28:05.772670   32390 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:28:05.810153   32390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0831 22:28:05.810225   32390 ssh_runner.go:195] Run: which lz4
	I0831 22:28:05.814402   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0831 22:28:05.814517   32390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 22:28:05.818880   32390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 22:28:05.818915   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0831 22:28:07.153042   32390 crio.go:462] duration metric: took 1.338576702s to copy over tarball
	I0831 22:28:07.153109   32390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 22:28:09.169452   32390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.016316735s)
	I0831 22:28:09.169480   32390 crio.go:469] duration metric: took 2.016414434s to extract the tarball
	I0831 22:28:09.169490   32390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 22:28:09.206468   32390 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:28:09.251895   32390 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:28:09.251918   32390 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:28:09.251927   32390 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.0 crio true true} ...
	I0831 22:28:09.252050   32390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:28:09.252109   32390 ssh_runner.go:195] Run: crio config
	I0831 22:28:09.300337   32390 cni.go:84] Creating CNI manager for ""
	I0831 22:28:09.300355   32390 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0831 22:28:09.300376   32390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:28:09.300401   32390 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957517 NodeName:ha-957517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:28:09.300516   32390 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-957517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
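Configuration documents like the kubeadm/kubelet/kube-proxy YAML above are rendered from the cluster config; one common way to produce such YAML in Go is text/template. A minimal sketch of that approach, where the template text is a trimmed, hypothetical fragment rather than minikube's actual template:

package main

import (
    "os"
    "text/template"
)

type clusterParams struct {
    AdvertiseAddress string
    BindPort         int
    NodeName         string
    PodSubnet        string
}

// A trimmed, hypothetical template fragment in the spirit of the kubeadm config above.
const kubeadmFragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
    t := template.Must(template.New("kubeadm").Parse(kubeadmFragment))
    _ = t.Execute(os.Stdout, clusterParams{
        AdvertiseAddress: "192.168.39.137",
        BindPort:         8443,
        NodeName:         "ha-957517",
        PodSubnet:        "10.244.0.0/16",
    })
}

Keeping the values in a struct and the YAML in a template is what lets the same document be regenerated for each node with only the address and name changing.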
	I0831 22:28:09.300540   32390 kube-vip.go:115] generating kube-vip config ...
	I0831 22:28:09.300579   32390 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:28:09.318427   32390 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:28:09.318606   32390 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0831 22:28:09.318662   32390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:28:09.328707   32390 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:28:09.328775   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 22:28:09.338384   32390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0831 22:28:09.354709   32390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:28:09.370922   32390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:28:09.387555   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0831 22:28:09.403236   32390 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:28:09.407029   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:09.418828   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:09.544083   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:28:09.561788   32390 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.137
	I0831 22:28:09.561811   32390 certs.go:194] generating shared ca certs ...
	I0831 22:28:09.561830   32390 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:09.562005   32390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:28:09.562071   32390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:28:09.562086   32390 certs.go:256] generating profile certs ...
	I0831 22:28:09.562181   32390 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:28:09.562205   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt with IP's: []
	I0831 22:28:09.805603   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt ...
	I0831 22:28:09.805631   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt: {Name:mk3c85a6e367e84685bb8c9f750a4856c91ffd84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:09.805800   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key ...
	I0831 22:28:09.805818   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key: {Name:mk0b319fe409d802a990382870a94357c6813c0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:09.805891   32390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb
	I0831 22:28:09.805906   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.254]
	I0831 22:28:10.075422   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb ...
	I0831 22:28:10.075459   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb: {Name:mkb0b898c9451ea30d4110b419afe0b46b519093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.075652   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb ...
	I0831 22:28:10.075671   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb: {Name:mk368a6acd117e80f148d343fe5bc16885fa570c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.075770   32390 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.e2b65ffb -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:28:10.075859   32390 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.e2b65ffb -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:28:10.075910   32390 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:28:10.075923   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt with IP's: []
	I0831 22:28:10.260972   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt ...
	I0831 22:28:10.261000   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt: {Name:mkd0e3c1a312c99613f089ee0d75d00d8bc80cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.261194   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key ...
	I0831 22:28:10.261208   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key: {Name:mk478e548843f346c10b2feee222cdac2656123b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:10.261303   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:28:10.261321   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:28:10.261331   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:28:10.261344   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:28:10.261355   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:28:10.261366   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:28:10.261378   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:28:10.261388   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:28:10.261437   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:28:10.261471   32390 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:28:10.261480   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:28:10.261500   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:28:10.261521   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:28:10.261542   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:28:10.261577   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:10.261605   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.261620   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.261632   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.262174   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:28:10.290061   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:28:10.325128   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:28:10.367986   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:28:10.392476   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0831 22:28:10.416507   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:28:10.440851   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:28:10.465514   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:28:10.489898   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:28:10.513771   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:28:10.537008   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:28:10.560113   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:28:10.576451   32390 ssh_runner.go:195] Run: openssl version
	I0831 22:28:10.582089   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:28:10.592778   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.596976   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.597015   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:10.602591   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:28:10.612819   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:28:10.623013   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.627196   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.627234   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:28:10.632686   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:28:10.642922   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:28:10.653108   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.657491   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.657536   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:28:10.663173   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
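The openssl x509 -hash calls above compute the subject-hash filename that OpenSSL expects under /etc/ssl/certs (for example b5213941.0), which the ln -fs commands then create. A sketch that shells out the same way and makes the corresponding symlink; the certificate path is illustrative and the openssl binary must be on PATH:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log
    // Same invocation as in the log: print the subject hash of the certificate.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        fmt.Println("openssl failed:", err)
        return
    }
    hash := strings.TrimSpace(string(out))
    link := "/etc/ssl/certs/" + hash + ".0"
    // Same effect as `ln -fs`: replace any existing link with a fresh one.
    _ = os.Remove(link)
    if err := os.Symlink(certPath, link); err != nil {
        fmt.Println("symlink failed:", err)
        return
    }
    fmt.Println("linked", link, "->", certPath)
}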
	I0831 22:28:10.673710   32390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:28:10.677849   32390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:28:10.677901   32390 kubeadm.go:392] StartCluster: {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:28:10.677982   32390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:28:10.678033   32390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:28:10.722302   32390 cri.go:89] found id: ""
	I0831 22:28:10.722358   32390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:28:10.732118   32390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:28:10.741461   32390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:28:10.753013   32390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:28:10.753033   32390 kubeadm.go:157] found existing configuration files:
	
	I0831 22:28:10.753082   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:28:10.762969   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:28:10.763029   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:28:10.772864   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:28:10.782644   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:28:10.782699   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:28:10.792127   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:28:10.801207   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:28:10.801265   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:28:10.810482   32390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:28:10.819184   32390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:28:10.819237   32390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:28:10.828649   32390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 22:28:10.925572   32390 kubeadm.go:310] W0831 22:28:10.899153     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:28:10.928719   32390 kubeadm.go:310] W0831 22:28:10.902326     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:28:11.038797   32390 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:28:22.047996   32390 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:28:22.048073   32390 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:28:22.048184   32390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:28:22.048314   32390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:28:22.048434   32390 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:28:22.048528   32390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:28:22.049992   32390 out.go:235]   - Generating certificates and keys ...
	I0831 22:28:22.050074   32390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:28:22.050157   32390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:28:22.050246   32390 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:28:22.050312   32390 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:28:22.050531   32390 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:28:22.050599   32390 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:28:22.050674   32390 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:28:22.050837   32390 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-957517 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0831 22:28:22.050925   32390 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:28:22.051092   32390 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-957517 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0831 22:28:22.051189   32390 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:28:22.051247   32390 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:28:22.051285   32390 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:28:22.051355   32390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:28:22.051406   32390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:28:22.051454   32390 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:28:22.051498   32390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:28:22.051591   32390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:28:22.051660   32390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:28:22.051774   32390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:28:22.051866   32390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:28:22.053545   32390 out.go:235]   - Booting up control plane ...
	I0831 22:28:22.053636   32390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:28:22.053724   32390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:28:22.053807   32390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:28:22.053929   32390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:28:22.054024   32390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:28:22.054063   32390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:28:22.054166   32390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:28:22.054251   32390 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:28:22.054299   32390 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.426939ms
	I0831 22:28:22.054360   32390 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:28:22.054412   32390 kubeadm.go:310] [api-check] The API server is healthy after 5.955214171s
	I0831 22:28:22.054501   32390 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:28:22.054606   32390 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:28:22.054654   32390 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:28:22.054810   32390 kubeadm.go:310] [mark-control-plane] Marking the node ha-957517 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:28:22.054895   32390 kubeadm.go:310] [bootstrap-token] Using token: g1v7x3.21whabocm7k8avb9
	I0831 22:28:22.056571   32390 out.go:235]   - Configuring RBAC rules ...
	I0831 22:28:22.056676   32390 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:28:22.056769   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:28:22.056933   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:28:22.057043   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:28:22.057146   32390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:28:22.057223   32390 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:28:22.057315   32390 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:28:22.057356   32390 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:28:22.057400   32390 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:28:22.057406   32390 kubeadm.go:310] 
	I0831 22:28:22.057473   32390 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:28:22.057482   32390 kubeadm.go:310] 
	I0831 22:28:22.057591   32390 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:28:22.057604   32390 kubeadm.go:310] 
	I0831 22:28:22.057645   32390 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:28:22.057730   32390 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:28:22.057802   32390 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:28:22.057809   32390 kubeadm.go:310] 
	I0831 22:28:22.057853   32390 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:28:22.057858   32390 kubeadm.go:310] 
	I0831 22:28:22.057897   32390 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:28:22.057903   32390 kubeadm.go:310] 
	I0831 22:28:22.057948   32390 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:28:22.058015   32390 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:28:22.058080   32390 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:28:22.058086   32390 kubeadm.go:310] 
	I0831 22:28:22.058156   32390 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:28:22.058220   32390 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:28:22.058228   32390 kubeadm.go:310] 
	I0831 22:28:22.058298   32390 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g1v7x3.21whabocm7k8avb9 \
	I0831 22:28:22.058425   32390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e \
	I0831 22:28:22.058460   32390 kubeadm.go:310] 	--control-plane 
	I0831 22:28:22.058466   32390 kubeadm.go:310] 
	I0831 22:28:22.058542   32390 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:28:22.058550   32390 kubeadm.go:310] 
	I0831 22:28:22.058635   32390 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g1v7x3.21whabocm7k8avb9 \
	I0831 22:28:22.058740   32390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e 
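
The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo. A small Go sketch (not part of minikube) that recomputes the hash from the CA certificate; the path is the assumed location of the cluster CA on the guest:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed guest path for the cluster CA used by kubeadm.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's discovery hash is SHA-256 over the CA's DER-encoded
        // SubjectPublicKeyInfo, printed as "sha256:<hex>".
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
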
	I0831 22:28:22.058763   32390 cni.go:84] Creating CNI manager for ""
	I0831 22:28:22.058772   32390 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0831 22:28:22.060436   32390 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0831 22:28:22.061811   32390 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0831 22:28:22.067545   32390 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0831 22:28:22.067562   32390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0831 22:28:22.087664   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0831 22:28:22.518741   32390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:28:22.518778   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:22.518852   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957517 minikube.k8s.io/updated_at=2024_08_31T22_28_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=ha-957517 minikube.k8s.io/primary=true
	I0831 22:28:22.732258   32390 ops.go:34] apiserver oom_adj: -16
	I0831 22:28:22.732336   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:23.233261   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:23.733264   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:24.232391   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:24.733079   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:25.233160   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:25.732935   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:26.232794   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:28:26.334082   32390 kubeadm.go:1113] duration metric: took 3.81535594s to wait for elevateKubeSystemPrivileges
	I0831 22:28:26.334116   32390 kubeadm.go:394] duration metric: took 15.656216472s to StartCluster
	I0831 22:28:26.334136   32390 settings.go:142] acquiring lock: {Name:mkec6b4f5d3301688503002977bc4d63aab7adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:26.334225   32390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:28:26.334844   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/kubeconfig: {Name:mkc6d6b60cc62b336d228fe4b49e098aa4d94f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:26.335060   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:28:26.335087   32390 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:28:26.335110   32390 start.go:241] waiting for startup goroutines ...
	I0831 22:28:26.335118   32390 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 22:28:26.335178   32390 addons.go:69] Setting storage-provisioner=true in profile "ha-957517"
	I0831 22:28:26.335188   32390 addons.go:69] Setting default-storageclass=true in profile "ha-957517"
	I0831 22:28:26.335209   32390 addons.go:234] Setting addon storage-provisioner=true in "ha-957517"
	I0831 22:28:26.335217   32390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-957517"
	I0831 22:28:26.335249   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:26.335298   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:26.335614   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.335641   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.335654   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.335685   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.350245   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0831 22:28:26.350634   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.351162   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.351187   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.351527   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.351727   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:26.353759   32390 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:28:26.354122   32390 kapi.go:59] client config for ha-957517: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
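
The rest.Config dump above is the client minikube builds from the kubeconfig it just wrote, pointed at the HA VIP https://192.168.39.254:8443. For comparison, a minimal client-go sketch (illustrative, not minikube's kapi helper) that loads the same kubeconfig and lists nodes:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log line above; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-13149/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
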
	I0831 22:28:26.354622   32390 cert_rotation.go:140] Starting client certificate rotation controller
	I0831 22:28:26.354884   32390 addons.go:234] Setting addon default-storageclass=true in "ha-957517"
	I0831 22:28:26.354919   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:26.355016   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I0831 22:28:26.355291   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.355317   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.355396   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.355844   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.355865   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.356199   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.356783   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.356814   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.369948   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0831 22:28:26.370424   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.370778   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34193
	I0831 22:28:26.370919   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.370938   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.371101   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.371229   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.371503   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.371520   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.371754   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:26.371790   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:26.371855   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.372022   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:26.373948   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:26.376433   32390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:28:26.377866   32390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:28:26.377882   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:28:26.377900   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:26.380588   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.380988   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:26.381022   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.381206   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:26.381381   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:26.381543   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:26.381698   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
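
The "new ssh client" lines show the addon manifests being copied to the node over SSH with the machine's id_rsa key. A hedged sketch of an equivalent connection using golang.org/x/crypto/ssh (host, user and key path taken from the log; this is not minikube's sshutil):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and address come from the sshutil log line above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Throwaway test VM; production code should verify host keys.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.39.137:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("uname -a")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
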
	I0831 22:28:26.387026   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0831 22:28:26.387349   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:26.387733   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:26.387755   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:26.388032   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:26.388180   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:26.389338   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:26.389500   32390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:28:26.389513   32390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:28:26.389527   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:26.392386   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.392807   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:26.392834   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:26.392973   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:26.393125   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:26.393297   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:26.393429   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:26.508204   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
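
The pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway 192.168.39.1. A rough client-go sketch of the same edit done in-process rather than via sed and kubectl replace (assumptions: the ConfigMap key is Corefile and the forward stanza is indented with eight spaces, as the sed expression implies):

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
        // Insert the hosts block just before the forward plugin, as the sed does.
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
        if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
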
	I0831 22:28:26.520058   32390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:28:26.529019   32390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:28:26.992273   32390 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0831 22:28:27.172898   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.172918   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.172963   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.172980   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.173231   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173246   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.173253   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.173262   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.173345   32390 main.go:141] libmachine: (ha-957517) DBG | Closing plugin on server side
	I0831 22:28:27.173357   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173366   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.173381   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.173392   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.173461   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173477   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.173537   32390 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 22:28:27.173555   32390 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 22:28:27.173669   32390 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0831 22:28:27.173679   32390 round_trippers.go:469] Request Headers:
	I0831 22:28:27.173691   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:28:27.173688   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.173696   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:28:27.173706   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.185379   32390 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0831 22:28:27.186095   32390 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0831 22:28:27.186109   32390 round_trippers.go:469] Request Headers:
	I0831 22:28:27.186121   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:28:27.186130   32390 round_trippers.go:473]     Content-Type: application/json
	I0831 22:28:27.186136   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:28:27.191616   32390 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 22:28:27.191837   32390 main.go:141] libmachine: Making call to close driver server
	I0831 22:28:27.191853   32390 main.go:141] libmachine: (ha-957517) Calling .Close
	I0831 22:28:27.192084   32390 main.go:141] libmachine: Successfully made call to close driver server
	I0831 22:28:27.192101   32390 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 22:28:27.194185   32390 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0831 22:28:27.195609   32390 addons.go:510] duration metric: took 860.487547ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0831 22:28:27.195646   32390 start.go:246] waiting for cluster config update ...
	I0831 22:28:27.195661   32390 start.go:255] writing updated cluster config ...
	I0831 22:28:27.197202   32390 out.go:201] 
	I0831 22:28:27.198596   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:27.198655   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:27.200253   32390 out.go:177] * Starting "ha-957517-m02" control-plane node in "ha-957517" cluster
	I0831 22:28:27.201593   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:28:27.201611   32390 cache.go:56] Caching tarball of preloaded images
	I0831 22:28:27.201711   32390 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:28:27.201725   32390 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:28:27.201780   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:27.202121   32390 start.go:360] acquireMachinesLock for ha-957517-m02: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:28:27.202167   32390 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "ha-957517-m02"
	I0831 22:28:27.202189   32390 start.go:93] Provisioning new machine with config: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:28:27.202258   32390 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0831 22:28:27.203830   32390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 22:28:27.203917   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:27.203948   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:27.218470   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0831 22:28:27.218899   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:27.219448   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:27.219466   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:27.219768   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:27.219957   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:27.220099   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:27.220282   32390 start.go:159] libmachine.API.Create for "ha-957517" (driver="kvm2")
	I0831 22:28:27.220303   32390 client.go:168] LocalClient.Create starting
	I0831 22:28:27.220332   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:28:27.220369   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:28:27.220388   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:28:27.220457   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:28:27.220482   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:28:27.220508   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:28:27.220533   32390 main.go:141] libmachine: Running pre-create checks...
	I0831 22:28:27.220544   32390 main.go:141] libmachine: (ha-957517-m02) Calling .PreCreateCheck
	I0831 22:28:27.220699   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetConfigRaw
	I0831 22:28:27.221082   32390 main.go:141] libmachine: Creating machine...
	I0831 22:28:27.221096   32390 main.go:141] libmachine: (ha-957517-m02) Calling .Create
	I0831 22:28:27.221224   32390 main.go:141] libmachine: (ha-957517-m02) Creating KVM machine...
	I0831 22:28:27.222386   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found existing default KVM network
	I0831 22:28:27.222566   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found existing private KVM network mk-ha-957517
	I0831 22:28:27.222708   32390 main.go:141] libmachine: (ha-957517-m02) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02 ...
	I0831 22:28:27.222727   32390 main.go:141] libmachine: (ha-957517-m02) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:28:27.222810   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.222710   32754 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:28:27.222899   32390 main.go:141] libmachine: (ha-957517-m02) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:28:27.464061   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.463924   32754 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa...
	I0831 22:28:27.596673   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.596561   32754 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/ha-957517-m02.rawdisk...
	I0831 22:28:27.596706   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Writing magic tar header
	I0831 22:28:27.596724   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Writing SSH key tar header
	I0831 22:28:27.596736   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:27.596681   32754 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02 ...
	I0831 22:28:27.596840   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02
	I0831 22:28:27.596867   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02 (perms=drwx------)
	I0831 22:28:27.596879   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:28:27.596900   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:28:27.596915   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:28:27.596925   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:28:27.596941   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:28:27.596954   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:28:27.596966   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:28:27.596983   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:28:27.596995   32390 main.go:141] libmachine: (ha-957517-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:28:27.597006   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:28:27.597015   32390 main.go:141] libmachine: (ha-957517-m02) Creating domain...
	I0831 22:28:27.597032   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Checking permissions on dir: /home
	I0831 22:28:27.597043   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Skipping /home - not owner
	I0831 22:28:27.598016   32390 main.go:141] libmachine: (ha-957517-m02) define libvirt domain using xml: 
	I0831 22:28:27.598046   32390 main.go:141] libmachine: (ha-957517-m02) <domain type='kvm'>
	I0831 22:28:27.598057   32390 main.go:141] libmachine: (ha-957517-m02)   <name>ha-957517-m02</name>
	I0831 22:28:27.598071   32390 main.go:141] libmachine: (ha-957517-m02)   <memory unit='MiB'>2200</memory>
	I0831 22:28:27.598083   32390 main.go:141] libmachine: (ha-957517-m02)   <vcpu>2</vcpu>
	I0831 22:28:27.598095   32390 main.go:141] libmachine: (ha-957517-m02)   <features>
	I0831 22:28:27.598104   32390 main.go:141] libmachine: (ha-957517-m02)     <acpi/>
	I0831 22:28:27.598109   32390 main.go:141] libmachine: (ha-957517-m02)     <apic/>
	I0831 22:28:27.598114   32390 main.go:141] libmachine: (ha-957517-m02)     <pae/>
	I0831 22:28:27.598119   32390 main.go:141] libmachine: (ha-957517-m02)     
	I0831 22:28:27.598124   32390 main.go:141] libmachine: (ha-957517-m02)   </features>
	I0831 22:28:27.598130   32390 main.go:141] libmachine: (ha-957517-m02)   <cpu mode='host-passthrough'>
	I0831 22:28:27.598140   32390 main.go:141] libmachine: (ha-957517-m02)   
	I0831 22:28:27.598151   32390 main.go:141] libmachine: (ha-957517-m02)   </cpu>
	I0831 22:28:27.598156   32390 main.go:141] libmachine: (ha-957517-m02)   <os>
	I0831 22:28:27.598161   32390 main.go:141] libmachine: (ha-957517-m02)     <type>hvm</type>
	I0831 22:28:27.598166   32390 main.go:141] libmachine: (ha-957517-m02)     <boot dev='cdrom'/>
	I0831 22:28:27.598170   32390 main.go:141] libmachine: (ha-957517-m02)     <boot dev='hd'/>
	I0831 22:28:27.598176   32390 main.go:141] libmachine: (ha-957517-m02)     <bootmenu enable='no'/>
	I0831 22:28:27.598179   32390 main.go:141] libmachine: (ha-957517-m02)   </os>
	I0831 22:28:27.598187   32390 main.go:141] libmachine: (ha-957517-m02)   <devices>
	I0831 22:28:27.598192   32390 main.go:141] libmachine: (ha-957517-m02)     <disk type='file' device='cdrom'>
	I0831 22:28:27.598203   32390 main.go:141] libmachine: (ha-957517-m02)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/boot2docker.iso'/>
	I0831 22:28:27.598210   32390 main.go:141] libmachine: (ha-957517-m02)       <target dev='hdc' bus='scsi'/>
	I0831 22:28:27.598217   32390 main.go:141] libmachine: (ha-957517-m02)       <readonly/>
	I0831 22:28:27.598224   32390 main.go:141] libmachine: (ha-957517-m02)     </disk>
	I0831 22:28:27.598235   32390 main.go:141] libmachine: (ha-957517-m02)     <disk type='file' device='disk'>
	I0831 22:28:27.598244   32390 main.go:141] libmachine: (ha-957517-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:28:27.598257   32390 main.go:141] libmachine: (ha-957517-m02)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/ha-957517-m02.rawdisk'/>
	I0831 22:28:27.598268   32390 main.go:141] libmachine: (ha-957517-m02)       <target dev='hda' bus='virtio'/>
	I0831 22:28:27.598276   32390 main.go:141] libmachine: (ha-957517-m02)     </disk>
	I0831 22:28:27.598286   32390 main.go:141] libmachine: (ha-957517-m02)     <interface type='network'>
	I0831 22:28:27.598311   32390 main.go:141] libmachine: (ha-957517-m02)       <source network='mk-ha-957517'/>
	I0831 22:28:27.598334   32390 main.go:141] libmachine: (ha-957517-m02)       <model type='virtio'/>
	I0831 22:28:27.598346   32390 main.go:141] libmachine: (ha-957517-m02)     </interface>
	I0831 22:28:27.598357   32390 main.go:141] libmachine: (ha-957517-m02)     <interface type='network'>
	I0831 22:28:27.598368   32390 main.go:141] libmachine: (ha-957517-m02)       <source network='default'/>
	I0831 22:28:27.598379   32390 main.go:141] libmachine: (ha-957517-m02)       <model type='virtio'/>
	I0831 22:28:27.598390   32390 main.go:141] libmachine: (ha-957517-m02)     </interface>
	I0831 22:28:27.598400   32390 main.go:141] libmachine: (ha-957517-m02)     <serial type='pty'>
	I0831 22:28:27.598433   32390 main.go:141] libmachine: (ha-957517-m02)       <target port='0'/>
	I0831 22:28:27.598455   32390 main.go:141] libmachine: (ha-957517-m02)     </serial>
	I0831 22:28:27.598469   32390 main.go:141] libmachine: (ha-957517-m02)     <console type='pty'>
	I0831 22:28:27.598483   32390 main.go:141] libmachine: (ha-957517-m02)       <target type='serial' port='0'/>
	I0831 22:28:27.598497   32390 main.go:141] libmachine: (ha-957517-m02)     </console>
	I0831 22:28:27.598511   32390 main.go:141] libmachine: (ha-957517-m02)     <rng model='virtio'>
	I0831 22:28:27.598526   32390 main.go:141] libmachine: (ha-957517-m02)       <backend model='random'>/dev/random</backend>
	I0831 22:28:27.598537   32390 main.go:141] libmachine: (ha-957517-m02)     </rng>
	I0831 22:28:27.598546   32390 main.go:141] libmachine: (ha-957517-m02)     
	I0831 22:28:27.598561   32390 main.go:141] libmachine: (ha-957517-m02)     
	I0831 22:28:27.598573   32390 main.go:141] libmachine: (ha-957517-m02)   </devices>
	I0831 22:28:27.598583   32390 main.go:141] libmachine: (ha-957517-m02) </domain>
	I0831 22:28:27.598595   32390 main.go:141] libmachine: (ha-957517-m02) 
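
The <domain> XML above is handed to libvirt to define and boot the ha-957517-m02 VM. A rough sketch of that step using the Go libvirt bindings (assuming the libvirt.org/go/libvirt package; the kvm2 driver does considerably more than this):

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Same URI the cluster config shows (KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // The <domain> definition printed above, saved to a file.
        xml, err := os.ReadFile("ha-957517-m02.xml")
        if err != nil {
            panic(err)
        }
        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the defined domain
            panic(err)
        }
        fmt.Println("domain ha-957517-m02 started")
    }
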
	I0831 22:28:27.606046   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:77:f7:02 in network default
	I0831 22:28:27.606628   32390 main.go:141] libmachine: (ha-957517-m02) Ensuring networks are active...
	I0831 22:28:27.606653   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:27.607321   32390 main.go:141] libmachine: (ha-957517-m02) Ensuring network default is active
	I0831 22:28:27.607692   32390 main.go:141] libmachine: (ha-957517-m02) Ensuring network mk-ha-957517 is active
	I0831 22:28:27.608118   32390 main.go:141] libmachine: (ha-957517-m02) Getting domain xml...
	I0831 22:28:27.608771   32390 main.go:141] libmachine: (ha-957517-m02) Creating domain...
	I0831 22:28:28.787010   32390 main.go:141] libmachine: (ha-957517-m02) Waiting to get IP...
	I0831 22:28:28.787751   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:28.788105   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:28.788128   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:28.788090   32754 retry.go:31] will retry after 243.362281ms: waiting for machine to come up
	I0831 22:28:29.033610   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:29.034078   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:29.034096   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:29.034050   32754 retry.go:31] will retry after 243.613799ms: waiting for machine to come up
	I0831 22:28:29.279508   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:29.279930   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:29.279969   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:29.279892   32754 retry.go:31] will retry after 359.068943ms: waiting for machine to come up
	I0831 22:28:29.641640   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:29.642053   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:29.642074   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:29.642015   32754 retry.go:31] will retry after 517.837365ms: waiting for machine to come up
	I0831 22:28:30.161608   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:30.162039   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:30.162069   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:30.161994   32754 retry.go:31] will retry after 556.118435ms: waiting for machine to come up
	I0831 22:28:30.719681   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:30.720157   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:30.720186   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:30.720091   32754 retry.go:31] will retry after 830.853012ms: waiting for machine to come up
	I0831 22:28:31.552034   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:31.552488   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:31.552519   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:31.552440   32754 retry.go:31] will retry after 1.186910615s: waiting for machine to come up
	I0831 22:28:32.740382   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:32.740794   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:32.740815   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:32.740769   32754 retry.go:31] will retry after 1.401520174s: waiting for machine to come up
	I0831 22:28:34.144309   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:34.144770   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:34.144797   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:34.144733   32754 retry.go:31] will retry after 1.316598575s: waiting for machine to come up
	I0831 22:28:35.463142   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:35.463557   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:35.463590   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:35.463507   32754 retry.go:31] will retry after 2.182834787s: waiting for machine to come up
	I0831 22:28:37.648250   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:37.648795   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:37.648823   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:37.648745   32754 retry.go:31] will retry after 2.150253237s: waiting for machine to come up
	I0831 22:28:39.800341   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:39.800795   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:39.800816   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:39.800763   32754 retry.go:31] will retry after 2.340318676s: waiting for machine to come up
	I0831 22:28:42.142343   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:42.142784   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:42.142816   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:42.142730   32754 retry.go:31] will retry after 3.297096591s: waiting for machine to come up
	I0831 22:28:45.441400   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:45.441730   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find current IP address of domain ha-957517-m02 in network mk-ha-957517
	I0831 22:28:45.441752   32390 main.go:141] libmachine: (ha-957517-m02) DBG | I0831 22:28:45.441682   32754 retry.go:31] will retry after 5.294406767s: waiting for machine to come up
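The interleaved retry.go lines above show the driver polling libvirt for the guest's DHCP lease with a growing, jittered delay between attempts. Below is a minimal standalone Go sketch of that wait-with-backoff pattern; it is not minikube's retry.go, and the timeout, growth factor, and jitter are assumptions chosen to mirror the 243ms–5.3s spread seen in the log.

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitFor retries fn with a growing, jittered delay until it succeeds or
// the deadline passes -- the pattern behind the "will retry after ..."
// lines above. Standalone sketch; the constants are assumptions.
func waitFor(fn func() error, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    delay := 200 * time.Millisecond
    for {
        err := fn()
        if err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out waiting: %w", err)
        }
        // sleep for the base delay plus up to 100% jitter, then grow the base
        time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
        delay = delay * 3 / 2
    }
}

func main() {
    attempts := 0
    err := waitFor(func() error {
        attempts++
        if attempts < 4 {
            return errors.New("unable to find current IP address")
        }
        return nil
    }, 10*time.Second)
    fmt.Printf("succeeded after %d attempts, err=%v\n", attempts, err)
}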
	I0831 22:28:50.739962   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.740377   32390 main.go:141] libmachine: (ha-957517-m02) Found IP for machine: 192.168.39.61
	I0831 22:28:50.740407   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has current primary IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.740416   32390 main.go:141] libmachine: (ha-957517-m02) Reserving static IP address...
	I0831 22:28:50.740741   32390 main.go:141] libmachine: (ha-957517-m02) DBG | unable to find host DHCP lease matching {name: "ha-957517-m02", mac: "52:54:00:d0:a3:98", ip: "192.168.39.61"} in network mk-ha-957517
	I0831 22:28:50.811691   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Getting to WaitForSSH function...
	I0831 22:28:50.811722   32390 main.go:141] libmachine: (ha-957517-m02) Reserved static IP address: 192.168.39.61
	I0831 22:28:50.811735   32390 main.go:141] libmachine: (ha-957517-m02) Waiting for SSH to be available...
	I0831 22:28:50.814182   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.814543   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:50.814561   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.814759   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Using SSH client type: external
	I0831 22:28:50.814784   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa (-rw-------)
	I0831 22:28:50.814814   32390 main.go:141] libmachine: (ha-957517-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:28:50.814831   32390 main.go:141] libmachine: (ha-957517-m02) DBG | About to run SSH command:
	I0831 22:28:50.814846   32390 main.go:141] libmachine: (ha-957517-m02) DBG | exit 0
	I0831 22:28:50.943179   32390 main.go:141] libmachine: (ha-957517-m02) DBG | SSH cmd err, output: <nil>: 
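The WaitForSSH step above shells out to the system ssh binary with a fixed option set and the machine's generated private key. The sketch below assembles a comparable external ssh invocation in Go; the helper name, user, address, and key path are placeholders for illustration, not libmachine's actual code.

package main

import (
    "fmt"
    "os/exec"
)

// buildSSHCommand assembles an external ssh invocation with roughly the
// option set logged above. user/addr/keyPath are placeholders.
func buildSSHCommand(user, addr, keyPath, remoteCmd string) *exec.Cmd {
    args := []string{
        "-F", "/dev/null",
        "-o", "ConnectionAttempts=3",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        "-p", "22",
        fmt.Sprintf("%s@%s", user, addr),
        remoteCmd,
    }
    return exec.Command("ssh", args...)
}

func main() {
    cmd := buildSSHCommand("docker", "192.168.39.61", "/path/to/id_rsa", "exit 0")
    fmt.Println(cmd.String()) // inspect the command line; actually running it needs a live guest
}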
	I0831 22:28:50.943449   32390 main.go:141] libmachine: (ha-957517-m02) KVM machine creation complete!
	I0831 22:28:50.943801   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetConfigRaw
	I0831 22:28:50.944338   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:50.944529   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:50.944697   32390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:28:50.944710   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:28:50.945995   32390 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:28:50.946011   32390 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:28:50.946017   32390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:28:50.946023   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:50.948383   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.948775   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:50.948801   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:50.948948   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:50.949120   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:50.949270   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:50.949392   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:50.949575   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:50.949780   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:50.949793   32390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:28:51.058642   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:28:51.058665   32390 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:28:51.058676   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.061589   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.061990   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.062011   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.062214   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.062391   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.062559   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.062704   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.062875   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.063065   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.063077   32390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:28:51.171729   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:28:51.171805   32390 main.go:141] libmachine: found compatible host: buildroot
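Provisioner detection above runs `cat /etc/os-release` over SSH and matches on its fields. A small sketch of parsing that output follows; the sample input is taken from the log, and the parsing helper is illustrative rather than minikube's implementation.

package main

import (
    "bufio"
    "fmt"
    "strings"
)

// parseOSRelease turns `cat /etc/os-release` output into a key/value map.
func parseOSRelease(out string) map[string]string {
    kv := map[string]string{}
    sc := bufio.NewScanner(strings.NewReader(out))
    for sc.Scan() {
        if k, v, ok := strings.Cut(sc.Text(), "="); ok {
            kv[k] = strings.Trim(v, `"`)
        }
    }
    return kv
}

func main() {
    out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    if osr := parseOSRelease(out); osr["ID"] == "buildroot" {
        fmt.Println("found compatible host:", osr["ID"], "-", osr["PRETTY_NAME"])
    }
}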
	I0831 22:28:51.171815   32390 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:28:51.171824   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:51.172081   32390 buildroot.go:166] provisioning hostname "ha-957517-m02"
	I0831 22:28:51.172110   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:51.172298   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.174636   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.174937   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.174963   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.175084   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.175367   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.175620   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.175770   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.175940   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.176103   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.176115   32390 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517-m02 && echo "ha-957517-m02" | sudo tee /etc/hostname
	I0831 22:28:51.297517   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517-m02
	
	I0831 22:28:51.297541   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.300104   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.300437   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.300460   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.300605   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.300778   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.300923   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.301019   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.301206   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.301364   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.301380   32390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:28:51.420951   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:28:51.420978   32390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:28:51.420996   32390 buildroot.go:174] setting up certificates
	I0831 22:28:51.421010   32390 provision.go:84] configureAuth start
	I0831 22:28:51.421022   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetMachineName
	I0831 22:28:51.421294   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:51.423809   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.424172   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.424196   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.424326   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.426435   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.426694   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.426706   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.426861   32390 provision.go:143] copyHostCerts
	I0831 22:28:51.426886   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:51.426923   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:28:51.426932   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:28:51.427012   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:28:51.427136   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:51.427415   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:28:51.427434   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:28:51.427479   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:28:51.427643   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:51.427666   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:28:51.427672   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:28:51.427705   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:28:51.427790   32390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517-m02 san=[127.0.0.1 192.168.39.61 ha-957517-m02 localhost minikube]
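The provision step above issues a server certificate whose SANs cover the node's IP addresses and hostname aliases. The sketch below builds a certificate with those SANs using Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key, so treat it as an illustration of the SAN set only.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "net"
    "time"
)

// Issue a self-signed server certificate whose SANs match the node
// addresses logged above.
func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-957517-m02"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-957517-m02", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.61")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    fmt.Printf("issued %d-byte DER cert, SANs: %v %v\n", len(der), tmpl.DNSNames, tmpl.IPAddresses)
}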
	I0831 22:28:51.541189   32390 provision.go:177] copyRemoteCerts
	I0831 22:28:51.541254   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:28:51.541284   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.544087   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.544393   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.544418   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.544657   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.544882   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.545038   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.545186   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:51.629304   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:28:51.629365   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:28:51.654038   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:28:51.654101   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:28:51.678394   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:28:51.678465   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:28:51.701780   32390 provision.go:87] duration metric: took 280.752455ms to configureAuth
	I0831 22:28:51.701807   32390 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:28:51.702001   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:51.702090   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.704677   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.705020   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.705047   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.705250   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.705424   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.705583   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.705740   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.705916   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:51.706060   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:51.706074   32390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:28:51.929211   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:28:51.929239   32390 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:28:51.929248   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetURL
	I0831 22:28:51.930425   32390 main.go:141] libmachine: (ha-957517-m02) DBG | Using libvirt version 6000000
	I0831 22:28:51.932552   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.932820   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.932850   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.932962   32390 main.go:141] libmachine: Docker is up and running!
	I0831 22:28:51.932978   32390 main.go:141] libmachine: Reticulating splines...
	I0831 22:28:51.932986   32390 client.go:171] duration metric: took 24.7126751s to LocalClient.Create
	I0831 22:28:51.933009   32390 start.go:167] duration metric: took 24.71272858s to libmachine.API.Create "ha-957517"
	I0831 22:28:51.933020   32390 start.go:293] postStartSetup for "ha-957517-m02" (driver="kvm2")
	I0831 22:28:51.933029   32390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:28:51.933044   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:51.933279   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:28:51.933303   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:51.935189   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.935479   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:51.935507   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:51.935649   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:51.935796   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:51.935948   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:51.936037   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:52.021581   32390 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:28:52.026029   32390 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:28:52.026052   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:28:52.026177   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:28:52.026304   32390 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:28:52.026317   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:28:52.026427   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:28:52.036192   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:52.062145   32390 start.go:296] duration metric: took 129.114548ms for postStartSetup
	I0831 22:28:52.062184   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetConfigRaw
	I0831 22:28:52.062691   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:52.065694   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.066141   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.066168   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.066459   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:28:52.066633   32390 start.go:128] duration metric: took 24.864364924s to createHost
	I0831 22:28:52.066652   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:52.068944   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.069321   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.069350   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.069533   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:52.069755   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.069924   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.070092   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:52.070283   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:28:52.070504   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0831 22:28:52.070520   32390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:28:52.184296   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143332.160271398
	
	I0831 22:28:52.184316   32390 fix.go:216] guest clock: 1725143332.160271398
	I0831 22:28:52.184322   32390 fix.go:229] Guest: 2024-08-31 22:28:52.160271398 +0000 UTC Remote: 2024-08-31 22:28:52.066642729 +0000 UTC m=+71.155408944 (delta=93.628669ms)
	I0831 22:28:52.184336   32390 fix.go:200] guest clock delta is within tolerance: 93.628669ms
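fix.go above reads the guest's `date +%s.%N`, compares it with the host clock, and accepts the ~93ms delta as within tolerance. A tiny sketch of that comparison follows; the 2s tolerance used here is an assumption for illustration, not minikube's configured value.

package main

import (
    "fmt"
    "strconv"
    "time"
)

// clockDelta parses `date +%s.%N` output and returns the absolute skew
// relative to the host clock.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    secs, err := strconv.ParseFloat(guestOut, 64)
    if err != nil {
        return 0, err
    }
    guest := time.Unix(int64(secs), int64((secs-float64(int64(secs)))*1e9))
    delta := guest.Sub(host)
    if delta < 0 {
        delta = -delta
    }
    return delta, nil
}

func main() {
    host := time.Now()
    guestOut := fmt.Sprintf("%.9f", float64(host.UnixNano())/1e9+0.093) // ~93ms ahead, as in the log
    delta, err := clockDelta(guestOut, host)
    fmt.Printf("delta=%v within 2s tolerance: %v (err=%v)\n", delta, err == nil && delta <= 2*time.Second, err)
}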
	I0831 22:28:52.184340   32390 start.go:83] releasing machines lock for "ha-957517-m02", held for 24.982161706s
	I0831 22:28:52.184355   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.184586   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:52.187347   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.187705   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.187725   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.189995   32390 out.go:177] * Found network options:
	I0831 22:28:52.191454   32390 out.go:177]   - NO_PROXY=192.168.39.137
	W0831 22:28:52.192882   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:28:52.192907   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.193396   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.193585   32390 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:28:52.193695   32390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:28:52.193732   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	W0831 22:28:52.193825   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:28:52.193881   32390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:28:52.193897   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:28:52.196379   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.196622   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.196690   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.196713   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.196823   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:52.196986   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.197143   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:52.197160   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:52.197175   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:52.197270   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:28:52.197342   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:52.197441   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:28:52.197579   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:28:52.197698   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:28:52.440877   32390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:28:52.447656   32390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:28:52.447717   32390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:28:52.464132   32390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:28:52.464153   32390 start.go:495] detecting cgroup driver to use...
	I0831 22:28:52.464210   32390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:28:52.481918   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:28:52.495851   32390 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:28:52.495906   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:28:52.509527   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:28:52.522517   32390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:28:52.638789   32390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:28:52.796162   32390 docker.go:233] disabling docker service ...
	I0831 22:28:52.796229   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:28:52.810377   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:28:52.823253   32390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:28:52.934707   32390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:28:53.047800   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:28:53.063463   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:28:53.081704   32390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:28:53.081764   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.091965   32390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:28:53.092024   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.102695   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.114994   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.126800   32390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:28:53.137222   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.147123   32390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:28:53.164244   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
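The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and reset conmon_cgroup. The sketch below applies the same substitutions in-process on a sample config string; the input content is illustrative, and minikube actually applies these edits over SSH via sed rather than this way.

package main

import (
    "fmt"
    "regexp"
    "strings"
)

func main() {
    // illustrative starting content for 02-crio.conf
    conf := strings.Join([]string{
        `pause_image = "registry.k8s.io/pause:3.9"`,
        `cgroup_manager = "systemd"`,
        `conmon_cgroup = "system.slice"`,
        "",
    }, "\n")

    // pin the pause image and switch the cgroup driver
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    // drop any existing conmon_cgroup line, then re-add it after cgroup_manager
    conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
        ReplaceAllString(conf, "")
    conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
        ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

    fmt.Print(conf)
}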
	I0831 22:28:53.173764   32390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:28:53.182563   32390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:28:53.182608   32390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:28:53.194444   32390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:28:53.203288   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:53.314804   32390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:28:53.414716   32390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:28:53.414790   32390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:28:53.419842   32390 start.go:563] Will wait 60s for crictl version
	I0831 22:28:53.419894   32390 ssh_runner.go:195] Run: which crictl
	I0831 22:28:53.423434   32390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:28:53.458924   32390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:28:53.458999   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:53.486355   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:28:53.514411   32390 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:28:53.515852   32390 out.go:177]   - env NO_PROXY=192.168.39.137
	I0831 22:28:53.517106   32390 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:28:53.519492   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:53.519912   32390 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:28:41 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:28:53.519934   32390 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:28:53.520098   32390 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:28:53.523933   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:53.535737   32390 mustload.go:65] Loading cluster: ha-957517
	I0831 22:28:53.535907   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:28:53.536140   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:53.536182   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:53.550774   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0831 22:28:53.551178   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:53.551600   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:53.551621   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:53.551893   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:53.552045   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:28:53.553598   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:53.553888   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:53.553932   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:53.568671   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0831 22:28:53.569210   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:53.569685   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:53.569708   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:53.570070   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:53.570277   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:53.570462   32390 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.61
	I0831 22:28:53.570472   32390 certs.go:194] generating shared ca certs ...
	I0831 22:28:53.570489   32390 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:53.570633   32390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:28:53.570683   32390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:28:53.570694   32390 certs.go:256] generating profile certs ...
	I0831 22:28:53.570778   32390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:28:53.570809   32390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2
	I0831 22:28:53.570827   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.254]
	I0831 22:28:53.710539   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2 ...
	I0831 22:28:53.710563   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2: {Name:mk538af76639062ba338a47a4d807743b9ff5577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:53.710720   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2 ...
	I0831 22:28:53.710733   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2: {Name:mk009c0022cdeda046304ef0899ed335a9aeb360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:28:53.710799   32390 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.ac058aa2 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:28:53.710920   32390 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.ac058aa2 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:28:53.711037   32390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:28:53.711051   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:28:53.711063   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:28:53.711077   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:28:53.711090   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:28:53.711102   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:28:53.711114   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:28:53.711126   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:28:53.711137   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:28:53.711181   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:28:53.711208   32390 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:28:53.711219   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:28:53.711242   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:28:53.711261   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:28:53.711283   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:28:53.711319   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:28:53.711365   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:28:53.711379   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:53.711392   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:28:53.711422   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:53.714506   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:53.714911   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:53.714938   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:53.715082   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:53.715321   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:53.715480   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:53.715587   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:53.783633   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 22:28:53.788356   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 22:28:53.799627   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 22:28:53.804433   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 22:28:53.816674   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 22:28:53.821486   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 22:28:53.832363   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 22:28:53.837054   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 22:28:53.852852   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 22:28:53.857503   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 22:28:53.868022   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 22:28:53.872537   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 22:28:53.884015   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:28:53.909663   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:28:53.933495   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:28:53.957903   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:28:53.981855   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0831 22:28:54.005508   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:28:54.029675   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:28:54.053280   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:28:54.076641   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:28:54.101006   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:28:54.124523   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:28:54.147377   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 22:28:54.163427   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 22:28:54.179408   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 22:28:54.195690   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 22:28:54.211905   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 22:28:54.228975   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 22:28:54.245786   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 22:28:54.263032   32390 ssh_runner.go:195] Run: openssl version
	I0831 22:28:54.268756   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:28:54.279736   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:28:54.284315   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:28:54.284363   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:28:54.290270   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:28:54.300756   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:28:54.311469   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:54.315809   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:54.315871   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:28:54.321315   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:28:54.331911   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:28:54.342943   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:28:54.347210   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:28:54.347254   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:28:54.352716   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
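The "openssl x509 -hash -noout" calls above compute the subject-name hash that OpenSSL uses to look up trust anchors in /etc/ssl/certs, which is why each PEM is then symlinked as <hash>.0 (3ec20f2e.0, b5213941.0, 51391683.0). A minimal sketch of the same convention, using the minikubeCA path from this log:

  $ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 here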
	I0831 22:28:54.363082   32390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:28:54.366986   32390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:28:54.367033   32390 kubeadm.go:934] updating node {m02 192.168.39.61 8443 v1.31.0 crio true true} ...
	I0831 22:28:54.367114   32390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
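The empty ExecStart= followed by a second ExecStart= in the unit text above is the usual systemd drop-in idiom: the blank assignment clears the value inherited from the base kubelet.service before the new command line is set. To inspect the merged result on a node, generic systemd commands (not part of this log) suffice:

  $ systemctl cat kubelet.service      # base unit plus every drop-in, in load order
  $ systemd-delta --type=extended      # units whose settings are extended by drop-ins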
	I0831 22:28:54.367144   32390 kube-vip.go:115] generating kube-vip config ...
	I0831 22:28:54.367184   32390 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:28:54.382329   32390 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:28:54.382415   32390 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
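kube-vip runs as a static pod on each control-plane node; with cp_enable and vip_leaderelection set it takes a Lease in kube-system and the current leader answers ARP for the VIP 192.168.39.254, while lb_enable/lb_port spread API-server traffic across members on 8443. Two quick checks, assuming the manifest above is in effect (commands are generic, not from this log):

  $ ip addr show eth0 | grep 192.168.39.254          # the VIP is bound only on the current leader
  $ kubectl -n kube-system get lease plndr-cp-lock   # lease name matches vip_leasename above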
	I0831 22:28:54.382474   32390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:28:54.394131   32390 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0831 22:28:54.394185   32390 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0831 22:28:54.405229   32390 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0831 22:28:54.405261   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0831 22:28:54.405293   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:28:54.405268   32390 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0831 22:28:54.405389   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:28:54.409887   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0831 22:28:54.409911   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0831 22:28:55.821700   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:28:55.836367   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:28:55.836466   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:28:55.841577   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0831 22:28:55.841617   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0831 22:28:58.430088   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:28:58.430192   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:28:58.434955   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0831 22:28:58.434987   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
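The ?checksum=file:...sha256 suffix on the download URLs above makes the downloader verify each cached kubelet/kubeadm/kubectl binary against its published SHA-256 before it is copied into /var/lib/minikube/binaries. The equivalent manual check looks roughly like this (generic commands, not taken from the log):

  $ curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
  $ curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
  $ echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check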
	I0831 22:28:58.692031   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 22:28:58.701402   32390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:28:58.718672   32390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:28:58.734906   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:28:58.751100   32390 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:28:58.754853   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:28:58.766760   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:28:58.896996   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
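The grep/cp one-liner above pins control-plane.minikube.internal to the HA VIP by stripping any stale entry from /etc/hosts and appending the current one before copying the temp file back into place. Once it has run, the name should resolve locally (generic check, not from the log):

  $ getent hosts control-plane.minikube.internal   # expected: 192.168.39.254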
	I0831 22:28:58.914840   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:28:58.915281   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:28:58.915350   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:28:58.931278   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0831 22:28:58.931717   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:28:58.932301   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:28:58.932328   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:28:58.932622   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:28:58.932846   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:28:58.933002   32390 start.go:317] joinCluster: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:28:58.933127   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0831 22:28:58.933151   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:28:58.935853   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:58.936229   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:28:58.936260   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:28:58.936392   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:28:58.936571   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:28:58.936735   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:28:58.936856   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:28:59.097280   32390 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:28:59.097330   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vcedyv.uh9p93wlnbwgapwi --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m02 --control-plane --apiserver-advertise-address=192.168.39.61 --apiserver-bind-port=8443"
	I0831 22:29:19.350134   32390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vcedyv.uh9p93wlnbwgapwi --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m02 --control-plane --apiserver-advertise-address=192.168.39.61 --apiserver-bind-port=8443": (20.252769056s)
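The join command above was produced by the kubeadm token create --print-join-command --ttl=0 run on the first control plane; its --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA public key and can be recomputed on any member with the standard kubeadm recipe (generic commands; minikube keeps its CA under /var/lib/minikube/certs, stock kubeadm under /etc/kubernetes/pki):

  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'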
	I0831 22:29:19.350173   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0831 22:29:19.758953   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957517-m02 minikube.k8s.io/updated_at=2024_08_31T22_29_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=ha-957517 minikube.k8s.io/primary=false
	I0831 22:29:19.880353   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957517-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0831 22:29:20.003429   32390 start.go:319] duration metric: took 21.070424201s to joinCluster
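The trailing '-' on node-role.kubernetes.io/control-plane:NoSchedule- in the taint command above removes the taint kubeadm places on control-plane nodes, so ha-957517-m02 can also schedule ordinary workloads. A generic way to confirm (not part of this log):

  $ kubectl get node ha-957517-m02 -o jsonpath='{.spec.taints}'   # empty once the taint is gone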
	I0831 22:29:20.003509   32390 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:29:20.003771   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:29:20.004872   32390 out.go:177] * Verifying Kubernetes components...
	I0831 22:29:20.006079   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:29:20.341264   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:29:20.410688   32390 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:29:20.410894   32390 kapi.go:59] client config for ha-957517: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 22:29:20.410949   32390 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.137:8443
	I0831 22:29:20.411138   32390 node_ready.go:35] waiting up to 6m0s for node "ha-957517-m02" to be "Ready" ...
	I0831 22:29:20.411220   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:20.411227   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:20.411234   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:20.411239   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:20.423777   32390 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0831 22:29:20.911667   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:20.911693   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:20.911702   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:20.911708   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:20.921243   32390 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 22:29:21.412056   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:21.412073   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:21.412082   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:21.412085   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:21.416001   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:21.912280   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:21.912305   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:21.912316   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:21.912323   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:21.915704   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:22.411589   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:22.411608   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:22.411616   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:22.411622   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:22.414601   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:22.415221   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:22.911525   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:22.911546   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:22.911554   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:22.911559   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:22.915237   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:23.411475   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:23.411496   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:23.411504   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:23.411510   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:23.415282   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:23.912278   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:23.912302   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:23.912313   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:23.912321   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:23.915933   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:24.412277   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:24.412303   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:24.412315   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:24.412319   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:24.415947   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:24.416488   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:24.911942   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:24.911967   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:24.911978   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:24.911985   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:24.915879   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:25.412038   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:25.412068   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:25.412079   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:25.412085   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:25.415941   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:25.912318   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:25.912339   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:25.912347   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:25.912352   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:25.915698   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:26.411682   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:26.411703   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:26.411713   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:26.411720   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:26.415139   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:26.911444   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:26.911473   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:26.911483   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:26.911489   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:26.914977   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:26.915831   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:27.412252   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:27.412272   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:27.412280   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:27.412284   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:27.557980   32390 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I0831 22:29:27.912255   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:27.912282   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:27.912292   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:27.912296   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:27.915720   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:28.411502   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:28.411530   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:28.411542   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:28.411549   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:28.415301   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:28.912121   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:28.912150   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:28.912160   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:28.912166   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:28.915450   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:28.916479   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:29.412345   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:29.412367   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:29.412378   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:29.412384   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:29.416248   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:29.911417   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:29.911440   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:29.911448   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:29.911453   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:29.914597   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:30.411685   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:30.411706   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:30.411717   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:30.411721   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:30.414912   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:30.911979   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:30.912005   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:30.912015   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:30.912022   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:30.915304   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:31.412093   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:31.412121   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:31.412137   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:31.412142   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:31.415509   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:31.416030   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:31.911484   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:31.911513   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:31.911524   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:31.911529   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:31.915114   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:32.411662   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:32.411685   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:32.411693   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:32.411696   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:32.415131   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:32.912217   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:32.912237   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:32.912245   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:32.912251   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:32.915718   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:33.411633   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:33.411656   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:33.411667   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:33.411673   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:33.414773   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:33.911723   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:33.911741   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:33.911749   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:33.911753   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:33.914906   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:33.915623   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:34.411358   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:34.411379   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:34.411390   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:34.411394   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:34.415548   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:29:34.911534   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:34.911563   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:34.911573   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:34.911581   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:34.914824   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:35.412127   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:35.412158   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:35.412169   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:35.412175   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:35.415546   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:35.911393   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:35.911416   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:35.911426   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:35.911433   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:35.914627   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:36.411478   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:36.411499   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:36.411507   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:36.411511   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:36.414430   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:36.414833   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:36.912255   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:36.912278   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:36.912287   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:36.912292   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:36.915979   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:37.411305   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:37.411341   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:37.411354   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:37.411361   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:37.415106   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:37.912299   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:37.912324   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:37.912333   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:37.912338   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:37.916027   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:38.412221   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:38.412260   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:38.412272   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:38.412278   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:38.417429   32390 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 22:29:38.417935   32390 node_ready.go:53] node "ha-957517-m02" has status "Ready":"False"
	I0831 22:29:38.912150   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:38.912177   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:38.912188   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:38.912195   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:38.915847   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:39.411431   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:39.411456   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:39.411468   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:39.411476   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:39.414700   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:39.911707   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:39.911727   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:39.911735   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:39.911738   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:39.915458   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.411303   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:40.411336   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.411347   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.411352   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.414737   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.415173   32390 node_ready.go:49] node "ha-957517-m02" has status "Ready":"True"
	I0831 22:29:40.415189   32390 node_ready.go:38] duration metric: took 20.004037422s for node "ha-957517-m02" to be "Ready" ...
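The loop above simply re-issues GET /api/v1/nodes/ha-957517-m02 roughly twice a second until the Ready condition turns True, which here took about 20 seconds. Outside of minikube the same wait can be expressed with a single generic command:

  $ kubectl wait --for=condition=Ready node/ha-957517-m02 --timeout=6m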
	I0831 22:29:40.415197   32390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:29:40.415262   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:40.415271   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.415276   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.415282   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.419154   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.425091   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.425161   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-k7rsc
	I0831 22:29:40.425172   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.425179   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.425184   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.428475   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.429393   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.429406   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.429412   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.429418   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.431707   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.432466   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.432482   32390 pod_ready.go:82] duration metric: took 7.368991ms for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.432490   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.432542   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-pc7gn
	I0831 22:29:40.432551   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.432557   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.432562   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.434825   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.435308   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.435321   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.435347   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.435351   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.437687   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.438190   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.438212   32390 pod_ready.go:82] duration metric: took 5.714169ms for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.438223   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.438280   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517
	I0831 22:29:40.438291   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.438300   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.438309   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.440974   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.441951   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.441965   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.441972   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.441975   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.444856   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.445439   32390 pod_ready.go:93] pod "etcd-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.445460   32390 pod_ready.go:82] duration metric: took 7.229121ms for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.445473   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.445536   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m02
	I0831 22:29:40.445546   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.445555   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.445564   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.447802   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.448512   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:40.448529   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.448539   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.448544   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.450706   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:40.451250   32390 pod_ready.go:93] pod "etcd-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.451269   32390 pod_ready.go:82] duration metric: took 5.788447ms for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
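Each check in this phase fetches the static pod for one control-plane component and then the node it runs on, repeating per member. The same information can be pulled in one shot with generic kubectl queries (tier=control-plane is the label kubeadm puts on its static pods; commands not from this log):

  $ kubectl -n kube-system get pods -l tier=control-plane -o wide
  $ kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m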
	I0831 22:29:40.451288   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.611665   32390 request.go:632] Waited for 160.321918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:29:40.611739   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:29:40.611748   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.611764   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.611768   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.615193   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.812257   32390 request.go:632] Waited for 196.336667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.812324   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:40.812330   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:40.812337   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:40.812341   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:40.816056   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:40.816752   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:40.816783   32390 pod_ready.go:82] duration metric: took 365.483332ms for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:40.816797   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.012258   32390 request.go:632] Waited for 195.392394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:29:41.012309   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:29:41.012327   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.012339   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.012345   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.015816   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.212008   32390 request.go:632] Waited for 195.320702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:41.212064   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:41.212069   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.212076   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.212081   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.215234   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.215969   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:41.215984   32390 pod_ready.go:82] duration metric: took 399.177722ms for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.215993   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.412160   32390 request.go:632] Waited for 196.097047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:29:41.412222   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:29:41.412228   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.412235   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.412239   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.415704   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.611676   32390 request.go:632] Waited for 195.374996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:41.611726   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:41.611731   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.611738   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.611742   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.615175   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:41.615766   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:41.615783   32390 pod_ready.go:82] duration metric: took 399.784074ms for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.615793   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:41.811944   32390 request.go:632] Waited for 196.095531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:29:41.812016   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:29:41.812023   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:41.812033   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:41.812039   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:41.815339   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.011433   32390 request.go:632] Waited for 195.308047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.011488   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.011493   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.011501   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.011504   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.015258   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.015820   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:42.015837   32390 pod_ready.go:82] duration metric: took 400.038293ms for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.015847   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.211981   32390 request.go:632] Waited for 196.066436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:29:42.212063   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:29:42.212068   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.212078   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.212084   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.215281   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.411392   32390 request.go:632] Waited for 195.419289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.411447   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:42.411454   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.411461   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.411465   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.414825   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.415301   32390 pod_ready.go:93] pod "kube-proxy-dvpbk" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:42.415348   32390 pod_ready.go:82] duration metric: took 399.466629ms for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.415364   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.611375   32390 request.go:632] Waited for 195.917329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:29:42.611433   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:29:42.611441   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.611449   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.611455   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.614965   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:42.812169   32390 request.go:632] Waited for 196.361735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:42.812234   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:42.812241   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:42.812251   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:42.812256   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:42.814686   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:29:42.815209   32390 pod_ready.go:93] pod "kube-proxy-xrp64" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:42.815225   32390 pod_ready.go:82] duration metric: took 399.854298ms for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:42.815234   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.012334   32390 request.go:632] Waited for 197.047061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:29:43.012411   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:29:43.012419   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.012429   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.012439   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.015831   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.211770   32390 request.go:632] Waited for 195.377614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:43.211833   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:29:43.211841   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.211853   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.211858   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.215022   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.215784   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:43.215804   32390 pod_ready.go:82] duration metric: took 400.564003ms for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.215822   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.412010   32390 request.go:632] Waited for 196.11497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:29:43.412066   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:29:43.412071   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.412078   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.412083   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.415261   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.611791   32390 request.go:632] Waited for 195.874911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:43.611872   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:29:43.611879   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.611892   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.611902   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.615561   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:43.616048   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:29:43.616066   32390 pod_ready.go:82] duration metric: took 400.236887ms for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:29:43.616077   32390 pod_ready.go:39] duration metric: took 3.200871491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:29:43.616094   32390 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:29:43.616140   32390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:29:43.632526   32390 api_server.go:72] duration metric: took 23.628979508s to wait for apiserver process to appear ...
	I0831 22:29:43.632555   32390 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:29:43.632576   32390 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0831 22:29:43.637074   32390 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0831 22:29:43.637137   32390 round_trippers.go:463] GET https://192.168.39.137:8443/version
	I0831 22:29:43.637153   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.637160   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.637170   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.638004   32390 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0831 22:29:43.638106   32390 api_server.go:141] control plane version: v1.31.0
	I0831 22:29:43.638124   32390 api_server.go:131] duration metric: took 5.56316ms to wait for apiserver health ...
	I0831 22:29:43.638134   32390 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:29:43.811504   32390 request.go:632] Waited for 173.287765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:43.811593   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:43.811601   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:43.811612   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:43.811620   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:43.817744   32390 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 22:29:43.822317   32390 system_pods.go:59] 17 kube-system pods found
	I0831 22:29:43.822348   32390 system_pods.go:61] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:29:43.822353   32390 system_pods.go:61] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:29:43.822358   32390 system_pods.go:61] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:29:43.822364   32390 system_pods.go:61] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:29:43.822373   32390 system_pods.go:61] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:29:43.822378   32390 system_pods.go:61] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:29:43.822383   32390 system_pods.go:61] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:29:43.822390   32390 system_pods.go:61] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:29:43.822396   32390 system_pods.go:61] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:29:43.822400   32390 system_pods.go:61] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:29:43.822404   32390 system_pods.go:61] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:29:43.822410   32390 system_pods.go:61] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:29:43.822414   32390 system_pods.go:61] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:29:43.822418   32390 system_pods.go:61] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:29:43.822421   32390 system_pods.go:61] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:29:43.822424   32390 system_pods.go:61] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:29:43.822427   32390 system_pods.go:61] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:29:43.822436   32390 system_pods.go:74] duration metric: took 184.288863ms to wait for pod list to return data ...
	I0831 22:29:43.822445   32390 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:29:44.011541   32390 request.go:632] Waited for 189.016326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:29:44.011613   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:29:44.011619   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:44.011626   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:44.011630   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:44.015633   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:44.015890   32390 default_sa.go:45] found service account: "default"
	I0831 22:29:44.015913   32390 default_sa.go:55] duration metric: took 193.460938ms for default service account to be created ...
	I0831 22:29:44.015922   32390 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:29:44.211279   32390 request.go:632] Waited for 195.286649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:44.211381   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:29:44.211388   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:44.211395   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:44.211402   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:44.216223   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:29:44.220704   32390 system_pods.go:86] 17 kube-system pods found
	I0831 22:29:44.220726   32390 system_pods.go:89] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:29:44.220732   32390 system_pods.go:89] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:29:44.220736   32390 system_pods.go:89] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:29:44.220740   32390 system_pods.go:89] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:29:44.220744   32390 system_pods.go:89] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:29:44.220750   32390 system_pods.go:89] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:29:44.220755   32390 system_pods.go:89] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:29:44.220760   32390 system_pods.go:89] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:29:44.220766   32390 system_pods.go:89] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:29:44.220774   32390 system_pods.go:89] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:29:44.220780   32390 system_pods.go:89] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:29:44.220788   32390 system_pods.go:89] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:29:44.220794   32390 system_pods.go:89] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:29:44.220799   32390 system_pods.go:89] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:29:44.220805   32390 system_pods.go:89] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:29:44.220808   32390 system_pods.go:89] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:29:44.220814   32390 system_pods.go:89] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:29:44.220821   32390 system_pods.go:126] duration metric: took 204.892952ms to wait for k8s-apps to be running ...
	I0831 22:29:44.220830   32390 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:29:44.220880   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:29:44.236893   32390 system_svc.go:56] duration metric: took 16.05511ms WaitForService to wait for kubelet
	I0831 22:29:44.236916   32390 kubeadm.go:582] duration metric: took 24.233376408s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:29:44.236935   32390 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:29:44.412338   32390 request.go:632] Waited for 175.326713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes
	I0831 22:29:44.412418   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes
	I0831 22:29:44.412429   32390 round_trippers.go:469] Request Headers:
	I0831 22:29:44.412437   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:29:44.412442   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:29:44.415996   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:29:44.416895   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:29:44.416923   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:29:44.416947   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:29:44.416955   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:29:44.416961   32390 node_conditions.go:105] duration metric: took 180.022322ms to run NodePressure ...
	I0831 22:29:44.416977   32390 start.go:241] waiting for startup goroutines ...
	I0831 22:29:44.417005   32390 start.go:255] writing updated cluster config ...
	I0831 22:29:44.419190   32390 out.go:201] 
	I0831 22:29:44.420858   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:29:44.420943   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:29:44.422660   32390 out.go:177] * Starting "ha-957517-m03" control-plane node in "ha-957517" cluster
	I0831 22:29:44.423897   32390 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:29:44.423921   32390 cache.go:56] Caching tarball of preloaded images
	I0831 22:29:44.424026   32390 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:29:44.424037   32390 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:29:44.424145   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:29:44.424311   32390 start.go:360] acquireMachinesLock for ha-957517-m03: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:29:44.424354   32390 start.go:364] duration metric: took 24.425µs to acquireMachinesLock for "ha-957517-m03"
	I0831 22:29:44.424367   32390 start.go:93] Provisioning new machine with config: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:29:44.424457   32390 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0831 22:29:44.426128   32390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 22:29:44.426221   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:29:44.426255   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:29:44.440856   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0831 22:29:44.441305   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:29:44.441754   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:29:44.441776   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:29:44.442024   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:29:44.442213   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:29:44.442358   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:29:44.442524   32390 start.go:159] libmachine.API.Create for "ha-957517" (driver="kvm2")
	I0831 22:29:44.442552   32390 client.go:168] LocalClient.Create starting
	I0831 22:29:44.442584   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 22:29:44.442620   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:29:44.442644   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:29:44.442708   32390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 22:29:44.442737   32390 main.go:141] libmachine: Decoding PEM data...
	I0831 22:29:44.442754   32390 main.go:141] libmachine: Parsing certificate...
	I0831 22:29:44.442779   32390 main.go:141] libmachine: Running pre-create checks...
	I0831 22:29:44.442791   32390 main.go:141] libmachine: (ha-957517-m03) Calling .PreCreateCheck
	I0831 22:29:44.442939   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetConfigRaw
	I0831 22:29:44.443271   32390 main.go:141] libmachine: Creating machine...
	I0831 22:29:44.443285   32390 main.go:141] libmachine: (ha-957517-m03) Calling .Create
	I0831 22:29:44.443409   32390 main.go:141] libmachine: (ha-957517-m03) Creating KVM machine...
	I0831 22:29:44.444581   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found existing default KVM network
	I0831 22:29:44.444707   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found existing private KVM network mk-ha-957517
	I0831 22:29:44.444803   32390 main.go:141] libmachine: (ha-957517-m03) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03 ...
	I0831 22:29:44.444830   32390 main.go:141] libmachine: (ha-957517-m03) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:29:44.444890   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.444811   33157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:29:44.444984   32390 main.go:141] libmachine: (ha-957517-m03) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 22:29:44.667359   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.667216   33157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa...
	I0831 22:29:44.783983   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.783875   33157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/ha-957517-m03.rawdisk...
	I0831 22:29:44.784016   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Writing magic tar header
	I0831 22:29:44.784034   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Writing SSH key tar header
	I0831 22:29:44.784046   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:44.783987   33157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03 ...
	I0831 22:29:44.784107   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03
	I0831 22:29:44.784135   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 22:29:44.784156   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03 (perms=drwx------)
	I0831 22:29:44.784170   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:29:44.784187   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 22:29:44.784200   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 22:29:44.784215   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 22:29:44.784232   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 22:29:44.784245   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home/jenkins
	I0831 22:29:44.784265   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 22:29:44.784279   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 22:29:44.784295   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Checking permissions on dir: /home
	I0831 22:29:44.784307   32390 main.go:141] libmachine: (ha-957517-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 22:29:44.784323   32390 main.go:141] libmachine: (ha-957517-m03) Creating domain...
	I0831 22:29:44.784339   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Skipping /home - not owner
	I0831 22:29:44.785218   32390 main.go:141] libmachine: (ha-957517-m03) define libvirt domain using xml: 
	I0831 22:29:44.785241   32390 main.go:141] libmachine: (ha-957517-m03) <domain type='kvm'>
	I0831 22:29:44.785249   32390 main.go:141] libmachine: (ha-957517-m03)   <name>ha-957517-m03</name>
	I0831 22:29:44.785257   32390 main.go:141] libmachine: (ha-957517-m03)   <memory unit='MiB'>2200</memory>
	I0831 22:29:44.785264   32390 main.go:141] libmachine: (ha-957517-m03)   <vcpu>2</vcpu>
	I0831 22:29:44.785268   32390 main.go:141] libmachine: (ha-957517-m03)   <features>
	I0831 22:29:44.785273   32390 main.go:141] libmachine: (ha-957517-m03)     <acpi/>
	I0831 22:29:44.785281   32390 main.go:141] libmachine: (ha-957517-m03)     <apic/>
	I0831 22:29:44.785292   32390 main.go:141] libmachine: (ha-957517-m03)     <pae/>
	I0831 22:29:44.785302   32390 main.go:141] libmachine: (ha-957517-m03)     
	I0831 22:29:44.785313   32390 main.go:141] libmachine: (ha-957517-m03)   </features>
	I0831 22:29:44.785323   32390 main.go:141] libmachine: (ha-957517-m03)   <cpu mode='host-passthrough'>
	I0831 22:29:44.785341   32390 main.go:141] libmachine: (ha-957517-m03)   
	I0831 22:29:44.785354   32390 main.go:141] libmachine: (ha-957517-m03)   </cpu>
	I0831 22:29:44.785364   32390 main.go:141] libmachine: (ha-957517-m03)   <os>
	I0831 22:29:44.785375   32390 main.go:141] libmachine: (ha-957517-m03)     <type>hvm</type>
	I0831 22:29:44.785388   32390 main.go:141] libmachine: (ha-957517-m03)     <boot dev='cdrom'/>
	I0831 22:29:44.785398   32390 main.go:141] libmachine: (ha-957517-m03)     <boot dev='hd'/>
	I0831 22:29:44.785410   32390 main.go:141] libmachine: (ha-957517-m03)     <bootmenu enable='no'/>
	I0831 22:29:44.785420   32390 main.go:141] libmachine: (ha-957517-m03)   </os>
	I0831 22:29:44.785448   32390 main.go:141] libmachine: (ha-957517-m03)   <devices>
	I0831 22:29:44.785468   32390 main.go:141] libmachine: (ha-957517-m03)     <disk type='file' device='cdrom'>
	I0831 22:29:44.785478   32390 main.go:141] libmachine: (ha-957517-m03)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/boot2docker.iso'/>
	I0831 22:29:44.785488   32390 main.go:141] libmachine: (ha-957517-m03)       <target dev='hdc' bus='scsi'/>
	I0831 22:29:44.785504   32390 main.go:141] libmachine: (ha-957517-m03)       <readonly/>
	I0831 22:29:44.785520   32390 main.go:141] libmachine: (ha-957517-m03)     </disk>
	I0831 22:29:44.785536   32390 main.go:141] libmachine: (ha-957517-m03)     <disk type='file' device='disk'>
	I0831 22:29:44.785555   32390 main.go:141] libmachine: (ha-957517-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 22:29:44.785572   32390 main.go:141] libmachine: (ha-957517-m03)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/ha-957517-m03.rawdisk'/>
	I0831 22:29:44.785584   32390 main.go:141] libmachine: (ha-957517-m03)       <target dev='hda' bus='virtio'/>
	I0831 22:29:44.785594   32390 main.go:141] libmachine: (ha-957517-m03)     </disk>
	I0831 22:29:44.785605   32390 main.go:141] libmachine: (ha-957517-m03)     <interface type='network'>
	I0831 22:29:44.785618   32390 main.go:141] libmachine: (ha-957517-m03)       <source network='mk-ha-957517'/>
	I0831 22:29:44.785627   32390 main.go:141] libmachine: (ha-957517-m03)       <model type='virtio'/>
	I0831 22:29:44.785640   32390 main.go:141] libmachine: (ha-957517-m03)     </interface>
	I0831 22:29:44.785655   32390 main.go:141] libmachine: (ha-957517-m03)     <interface type='network'>
	I0831 22:29:44.785668   32390 main.go:141] libmachine: (ha-957517-m03)       <source network='default'/>
	I0831 22:29:44.785676   32390 main.go:141] libmachine: (ha-957517-m03)       <model type='virtio'/>
	I0831 22:29:44.785687   32390 main.go:141] libmachine: (ha-957517-m03)     </interface>
	I0831 22:29:44.785696   32390 main.go:141] libmachine: (ha-957517-m03)     <serial type='pty'>
	I0831 22:29:44.785703   32390 main.go:141] libmachine: (ha-957517-m03)       <target port='0'/>
	I0831 22:29:44.785712   32390 main.go:141] libmachine: (ha-957517-m03)     </serial>
	I0831 22:29:44.785723   32390 main.go:141] libmachine: (ha-957517-m03)     <console type='pty'>
	I0831 22:29:44.785738   32390 main.go:141] libmachine: (ha-957517-m03)       <target type='serial' port='0'/>
	I0831 22:29:44.785749   32390 main.go:141] libmachine: (ha-957517-m03)     </console>
	I0831 22:29:44.785759   32390 main.go:141] libmachine: (ha-957517-m03)     <rng model='virtio'>
	I0831 22:29:44.785772   32390 main.go:141] libmachine: (ha-957517-m03)       <backend model='random'>/dev/random</backend>
	I0831 22:29:44.785781   32390 main.go:141] libmachine: (ha-957517-m03)     </rng>
	I0831 22:29:44.785786   32390 main.go:141] libmachine: (ha-957517-m03)     
	I0831 22:29:44.785794   32390 main.go:141] libmachine: (ha-957517-m03)     
	I0831 22:29:44.785803   32390 main.go:141] libmachine: (ha-957517-m03)   </devices>
	I0831 22:29:44.785812   32390 main.go:141] libmachine: (ha-957517-m03) </domain>
	I0831 22:29:44.785826   32390 main.go:141] libmachine: (ha-957517-m03) 
	I0831 22:29:44.792239   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:ee:c9:7b in network default
	I0831 22:29:44.792796   32390 main.go:141] libmachine: (ha-957517-m03) Ensuring networks are active...
	I0831 22:29:44.792815   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:44.793478   32390 main.go:141] libmachine: (ha-957517-m03) Ensuring network default is active
	I0831 22:29:44.793753   32390 main.go:141] libmachine: (ha-957517-m03) Ensuring network mk-ha-957517 is active
	I0831 22:29:44.794247   32390 main.go:141] libmachine: (ha-957517-m03) Getting domain xml...
	I0831 22:29:44.794923   32390 main.go:141] libmachine: (ha-957517-m03) Creating domain...
	I0831 22:29:46.018660   32390 main.go:141] libmachine: (ha-957517-m03) Waiting to get IP...
	I0831 22:29:46.019544   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.019918   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.019975   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.019928   33157 retry.go:31] will retry after 188.471058ms: waiting for machine to come up
	I0831 22:29:46.210289   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.210735   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.210757   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.210710   33157 retry.go:31] will retry after 266.957858ms: waiting for machine to come up
	I0831 22:29:46.479104   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.479524   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.479551   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.479483   33157 retry.go:31] will retry after 455.33176ms: waiting for machine to come up
	I0831 22:29:46.936036   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:46.936572   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:46.936599   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:46.936526   33157 retry.go:31] will retry after 567.079035ms: waiting for machine to come up
	I0831 22:29:47.505211   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:47.505670   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:47.505696   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:47.505633   33157 retry.go:31] will retry after 565.404588ms: waiting for machine to come up
	I0831 22:29:48.072964   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:48.073879   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:48.073907   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:48.073829   33157 retry.go:31] will retry after 901.14711ms: waiting for machine to come up
	I0831 22:29:48.976876   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:48.977333   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:48.977354   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:48.977294   33157 retry.go:31] will retry after 952.500278ms: waiting for machine to come up
	I0831 22:29:49.931405   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:49.931882   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:49.931909   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:49.931847   33157 retry.go:31] will retry after 896.313086ms: waiting for machine to come up
	I0831 22:29:50.829903   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:50.830367   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:50.830392   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:50.830340   33157 retry.go:31] will retry after 1.726862486s: waiting for machine to come up
	I0831 22:29:52.559146   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:52.559587   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:52.559617   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:52.559539   33157 retry.go:31] will retry after 1.792217096s: waiting for machine to come up
	I0831 22:29:54.353025   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:54.353502   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:54.353532   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:54.353446   33157 retry.go:31] will retry after 2.567340298s: waiting for machine to come up
	I0831 22:29:56.922225   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:56.922595   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:56.922629   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:56.922585   33157 retry.go:31] will retry after 3.025143911s: waiting for machine to come up
	I0831 22:29:59.949599   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:29:59.950025   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:29:59.950058   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:29:59.949976   33157 retry.go:31] will retry after 3.145761762s: waiting for machine to come up
	I0831 22:30:03.098803   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:03.099192   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find current IP address of domain ha-957517-m03 in network mk-ha-957517
	I0831 22:30:03.099220   32390 main.go:141] libmachine: (ha-957517-m03) DBG | I0831 22:30:03.099151   33157 retry.go:31] will retry after 5.518514687s: waiting for machine to come up
	I0831 22:30:08.622195   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.622695   32390 main.go:141] libmachine: (ha-957517-m03) Found IP for machine: 192.168.39.26
	I0831 22:30:08.622717   32390 main.go:141] libmachine: (ha-957517-m03) Reserving static IP address...
	I0831 22:30:08.622730   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has current primary IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.623147   32390 main.go:141] libmachine: (ha-957517-m03) DBG | unable to find host DHCP lease matching {name: "ha-957517-m03", mac: "52:54:00:5e:d5:49", ip: "192.168.39.26"} in network mk-ha-957517
	I0831 22:30:08.697760   32390 main.go:141] libmachine: (ha-957517-m03) Reserved static IP address: 192.168.39.26
	I0831 22:30:08.697781   32390 main.go:141] libmachine: (ha-957517-m03) Waiting for SSH to be available...
	I0831 22:30:08.697790   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Getting to WaitForSSH function...
	I0831 22:30:08.700520   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.700975   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:08.701007   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.701091   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Using SSH client type: external
	I0831 22:30:08.701120   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa (-rw-------)
	I0831 22:30:08.701167   32390 main.go:141] libmachine: (ha-957517-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 22:30:08.701188   32390 main.go:141] libmachine: (ha-957517-m03) DBG | About to run SSH command:
	I0831 22:30:08.701210   32390 main.go:141] libmachine: (ha-957517-m03) DBG | exit 0
	I0831 22:30:08.823670   32390 main.go:141] libmachine: (ha-957517-m03) DBG | SSH cmd err, output: <nil>: 
	I0831 22:30:08.823927   32390 main.go:141] libmachine: (ha-957517-m03) KVM machine creation complete!
	I0831 22:30:08.824318   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetConfigRaw
	I0831 22:30:08.824831   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:08.825067   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:08.825241   32390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 22:30:08.825252   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:30:08.826809   32390 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 22:30:08.826826   32390 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 22:30:08.826834   32390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 22:30:08.826843   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:08.829136   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.829600   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:08.829626   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.829803   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:08.829963   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.830121   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.830308   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:08.830495   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:08.830754   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:08.830768   32390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 22:30:08.930973   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:30:08.930995   32390 main.go:141] libmachine: Detecting the provisioner...
	I0831 22:30:08.931004   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:08.933860   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.934206   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:08.934234   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:08.934438   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:08.934624   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.934796   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:08.934921   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:08.935078   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:08.935240   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:08.935251   32390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 22:30:09.032484   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 22:30:09.032577   32390 main.go:141] libmachine: found compatible host: buildroot
	I0831 22:30:09.032594   32390 main.go:141] libmachine: Provisioning with buildroot...
	I0831 22:30:09.032603   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:30:09.032881   32390 buildroot.go:166] provisioning hostname "ha-957517-m03"
	I0831 22:30:09.032911   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:30:09.033090   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.035689   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.036112   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.036144   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.036296   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.036448   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.036561   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.036658   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.036844   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.037050   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.037067   32390 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517-m03 && echo "ha-957517-m03" | sudo tee /etc/hostname
	I0831 22:30:09.151226   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517-m03
	
	I0831 22:30:09.151259   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.154054   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.154443   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.154473   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.154629   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.154830   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.154991   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.155117   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.155284   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.155488   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.155504   32390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:30:09.265290   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:30:09.265326   32390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:30:09.265347   32390 buildroot.go:174] setting up certificates
	I0831 22:30:09.265357   32390 provision.go:84] configureAuth start
	I0831 22:30:09.265369   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetMachineName
	I0831 22:30:09.265655   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:09.268441   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.268855   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.268890   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.269082   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.271175   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.271490   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.271520   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.271641   32390 provision.go:143] copyHostCerts
	I0831 22:30:09.271677   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:30:09.271720   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:30:09.271737   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:30:09.271809   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:30:09.271888   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:30:09.271907   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:30:09.271914   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:30:09.271940   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:30:09.271985   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:30:09.272001   32390 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:30:09.272007   32390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:30:09.272028   32390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:30:09.272079   32390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517-m03 san=[127.0.0.1 192.168.39.26 ha-957517-m03 localhost minikube]
	I0831 22:30:09.432938   32390 provision.go:177] copyRemoteCerts
	I0831 22:30:09.432994   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:30:09.433016   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.435571   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.435859   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.435890   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.436043   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.436226   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.436365   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.436497   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:09.518347   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:30:09.518435   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:30:09.544191   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:30:09.544280   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:30:09.569902   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:30:09.569978   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:30:09.595340   32390 provision.go:87] duration metric: took 329.950411ms to configureAuth
	I0831 22:30:09.595372   32390 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:30:09.595578   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:30:09.595647   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.598396   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.598877   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.598908   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.599078   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.599276   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.599484   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.599656   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.599788   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.599975   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.599990   32390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:30:09.819547   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:30:09.819575   32390 main.go:141] libmachine: Checking connection to Docker...
	I0831 22:30:09.819585   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetURL
	I0831 22:30:09.820815   32390 main.go:141] libmachine: (ha-957517-m03) DBG | Using libvirt version 6000000
	I0831 22:30:09.823079   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.823462   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.823491   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.823657   32390 main.go:141] libmachine: Docker is up and running!
	I0831 22:30:09.823674   32390 main.go:141] libmachine: Reticulating splines...
	I0831 22:30:09.823683   32390 client.go:171] duration metric: took 25.381122795s to LocalClient.Create
	I0831 22:30:09.823710   32390 start.go:167] duration metric: took 25.381187201s to libmachine.API.Create "ha-957517"
	I0831 22:30:09.823721   32390 start.go:293] postStartSetup for "ha-957517-m03" (driver="kvm2")
	I0831 22:30:09.823736   32390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:30:09.823758   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:09.824025   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:30:09.824052   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.826223   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.826556   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.826583   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.826720   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.826885   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.827040   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.827168   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:09.906472   32390 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:30:09.911007   32390 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:30:09.911034   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:30:09.911104   32390 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:30:09.911213   32390 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:30:09.911225   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:30:09.911357   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:30:09.921606   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:30:09.950196   32390 start.go:296] duration metric: took 126.462079ms for postStartSetup
	I0831 22:30:09.950242   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetConfigRaw
	I0831 22:30:09.950835   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:09.953781   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.954146   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.954183   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.954461   32390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:30:09.954649   32390 start.go:128] duration metric: took 25.530183034s to createHost
	I0831 22:30:09.954673   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:09.956919   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.957196   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:09.957222   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:09.957359   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:09.957506   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.957628   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:09.957773   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:09.957908   32390 main.go:141] libmachine: Using SSH client type: native
	I0831 22:30:09.958077   32390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0831 22:30:09.958086   32390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:30:10.056681   32390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143410.033508490
	
	I0831 22:30:10.056705   32390 fix.go:216] guest clock: 1725143410.033508490
	I0831 22:30:10.056717   32390 fix.go:229] Guest: 2024-08-31 22:30:10.03350849 +0000 UTC Remote: 2024-08-31 22:30:09.954660074 +0000 UTC m=+149.043426289 (delta=78.848416ms)
	I0831 22:30:10.056736   32390 fix.go:200] guest clock delta is within tolerance: 78.848416ms
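The fix.go tolerance check above is simple arithmetic: parse the guest's `date +%s.%N` output, compare it with the local wall clock, and only resync if the skew is too large. A rough Go sketch of the same calculation; the hard-coded timestamp is the value from this run, and the 2s tolerance is an assumption rather than minikube's actual constant.

-- illustrative Go sketch --
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1725143410.033508490" // output of `date +%s.%N` on the guest (from the log)
	secs, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's real threshold
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
-- /illustrative Go sketch --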
	I0831 22:30:10.056743   32390 start.go:83] releasing machines lock for "ha-957517-m03", held for 25.63238216s
	I0831 22:30:10.056761   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.057037   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:10.059647   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.060036   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:10.060066   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.062714   32390 out.go:177] * Found network options:
	I0831 22:30:10.064732   32390 out.go:177]   - NO_PROXY=192.168.39.137,192.168.39.61
	W0831 22:30:10.066213   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 22:30:10.066241   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:30:10.066258   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.066963   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.067195   32390 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:30:10.067314   32390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:30:10.067371   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	W0831 22:30:10.067489   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 22:30:10.067517   32390 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 22:30:10.067586   32390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:30:10.067616   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:30:10.070260   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070451   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070620   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:10.070669   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070830   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:10.070851   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:10.070860   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:10.071059   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:10.071093   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:30:10.071250   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:10.071266   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:30:10.071434   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:30:10.071438   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:10.071591   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:30:10.304386   32390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:30:10.310730   32390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:30:10.310802   32390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:30:10.329120   32390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 22:30:10.329151   32390 start.go:495] detecting cgroup driver to use...
	I0831 22:30:10.329223   32390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:30:10.346114   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:30:10.361295   32390 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:30:10.361360   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:30:10.375585   32390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:30:10.389748   32390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:30:10.508832   32390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:30:10.654279   32390 docker.go:233] disabling docker service ...
	I0831 22:30:10.654357   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:30:10.670019   32390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:30:10.684777   32390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:30:10.819832   32390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:30:10.949249   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:30:10.964959   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:30:10.983961   32390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:30:10.984026   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:10.995937   32390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:30:10.996003   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.009572   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.021077   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.032655   32390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:30:11.044442   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.056421   32390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:30:11.075569   32390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
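The chain of sed commands above amounts to rewriting a few keys in CRI-O's drop-in config. A hedged Go equivalent of just the pause_image and cgroup_manager edits (same file path as in the log; the conmon_cgroup and sysctl edits are omitted):

-- illustrative Go sketch --
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the logged commands
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Same effect as the two sed -i substitutions above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /illustrative Go sketch --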
	I0831 22:30:11.087138   32390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:30:11.098703   32390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 22:30:11.098768   32390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 22:30:11.114721   32390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
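The two kernel tweaks above (loading br_netfilter when the bridge sysctl is missing, then forcing IPv4 forwarding on) could be sketched like this in Go, assuming the process runs as root:

-- illustrative Go sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge sysctl is absent, the br_netfilter module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /illustrative Go sketch --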
	I0831 22:30:11.127062   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:30:11.246987   32390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:30:11.340825   32390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:30:11.340901   32390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:30:11.346280   32390 start.go:563] Will wait 60s for crictl version
	I0831 22:30:11.346353   32390 ssh_runner.go:195] Run: which crictl
	I0831 22:30:11.350335   32390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:30:11.390222   32390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:30:11.390311   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:30:11.420458   32390 ssh_runner.go:195] Run: crio --version
	I0831 22:30:11.451574   32390 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:30:11.452841   32390 out.go:177]   - env NO_PROXY=192.168.39.137
	I0831 22:30:11.454238   32390 out.go:177]   - env NO_PROXY=192.168.39.137,192.168.39.61
	I0831 22:30:11.455403   32390 main.go:141] libmachine: (ha-957517-m03) Calling .GetIP
	I0831 22:30:11.458308   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:11.458781   32390 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:30:11.458818   32390 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:30:11.459100   32390 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:30:11.463728   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
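The bash one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, append the gateway mapping, and copy the result back into place. A Go sketch of the same pattern; it only writes the scratch file, leaving out the privileged copy.

-- illustrative Go sketch --
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors the grep -v: drop any stale host.minikube.internal mapping.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The logged command writes /tmp/h.$$ and then does a privileged cp back;
	// only the scratch-file write is shown here.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /illustrative Go sketch --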
	I0831 22:30:11.476818   32390 mustload.go:65] Loading cluster: ha-957517
	I0831 22:30:11.477069   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:30:11.477327   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:30:11.477375   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:30:11.492867   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0831 22:30:11.493293   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:30:11.493736   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:30:11.493754   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:30:11.494048   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:30:11.494252   32390 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:30:11.495794   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:30:11.496076   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:30:11.496122   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:30:11.511012   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0831 22:30:11.511448   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:30:11.511933   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:30:11.511956   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:30:11.512264   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:30:11.512460   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:30:11.512631   32390 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.26
	I0831 22:30:11.512643   32390 certs.go:194] generating shared ca certs ...
	I0831 22:30:11.512657   32390 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:30:11.512787   32390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:30:11.512832   32390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:30:11.512841   32390 certs.go:256] generating profile certs ...
	I0831 22:30:11.512908   32390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:30:11.512934   32390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f
	I0831 22:30:11.512947   32390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.26 192.168.39.254]
	I0831 22:30:11.617566   32390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f ...
	I0831 22:30:11.617595   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f: {Name:mkc83f4cd90b98fa20d6a00874dcc873c13e5ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:30:11.617782   32390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f ...
	I0831 22:30:11.617796   32390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f: {Name:mkfc266e41c2031a162953cdbdca61197e3b8aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:30:11.617904   32390 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.cf3c730f -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:30:11.618042   32390 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.cf3c730f -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:30:11.618209   32390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:30:11.618226   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:30:11.618243   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:30:11.618257   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:30:11.618269   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:30:11.618281   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:30:11.618294   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:30:11.618305   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:30:11.618317   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:30:11.618366   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:30:11.618393   32390 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:30:11.618401   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:30:11.618422   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:30:11.618442   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:30:11.618466   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:30:11.618503   32390 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:30:11.618528   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:11.618541   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:30:11.618553   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:30:11.618581   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:30:11.621676   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:11.622055   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:30:11.622079   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:11.622239   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:30:11.622470   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:30:11.622625   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:30:11.622772   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:30:11.699703   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 22:30:11.706252   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 22:30:11.720239   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 22:30:11.724731   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 22:30:11.736091   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 22:30:11.740441   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 22:30:11.750982   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 22:30:11.756133   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 22:30:11.768201   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 22:30:11.772564   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 22:30:11.783921   32390 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 22:30:11.787891   32390 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 22:30:11.799246   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:30:11.826642   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:30:11.855464   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:30:11.884492   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:30:11.912993   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0831 22:30:11.939431   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:30:11.964317   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:30:11.989006   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:30:12.013606   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:30:12.040296   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:30:12.064249   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:30:12.089686   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 22:30:12.108965   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 22:30:12.127712   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 22:30:12.148320   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 22:30:12.168568   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 22:30:12.187086   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 22:30:12.204466   32390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 22:30:12.222617   32390 ssh_runner.go:195] Run: openssl version
	I0831 22:30:12.228737   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:30:12.240426   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:30:12.245453   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:30:12.245503   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:30:12.251237   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:30:12.262117   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:30:12.272708   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:30:12.277124   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:30:12.277185   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:30:12.282772   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:30:12.293503   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:30:12.304508   32390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:12.309153   32390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:12.309206   32390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:30:12.322442   32390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:30:12.335035   32390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:30:12.339018   32390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:30:12.339065   32390 kubeadm.go:934] updating node {m03 192.168.39.26 8443 v1.31.0 crio true true} ...
	I0831 22:30:12.339136   32390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:30:12.339164   32390 kube-vip.go:115] generating kube-vip config ...
	I0831 22:30:12.339197   32390 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:30:12.357293   32390 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:30:12.357358   32390 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
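The manifest dumped above is a static pod rendered from a handful of inputs (the VIP address, the API server port, whether control-plane load-balancing is enabled). A simplified, hypothetical template sketch in Go; this is not minikube's kube-vip.go template, and most of the env vars are dropped.

-- illustrative Go sketch --
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values mirror the config dumped above: VIP 192.168.39.254, port 8443.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
}
-- /illustrative Go sketch --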
	I0831 22:30:12.357417   32390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:30:12.366929   32390 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0831 22:30:12.366976   32390 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0831 22:30:12.376334   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0831 22:30:12.376338   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0831 22:30:12.376356   32390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0831 22:30:12.376380   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:30:12.376387   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:30:12.376359   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:30:12.376459   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:30:12.376465   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:30:12.381168   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0831 22:30:12.381189   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0831 22:30:12.404499   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0831 22:30:12.404545   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0831 22:30:12.404589   32390 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:30:12.404694   32390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:30:12.450536   32390 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0831 22:30:12.450586   32390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
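The `?checksum=file:...sha256` URLs above pair each release binary with its published digest. A small Go sketch of that download-and-verify pattern for kubectl; this is not minikube's downloader, and running it fetches roughly 50 MB from dl.k8s.io.

-- illustrative Go sketch --
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory, failing on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.Fields(string(sum))[0] {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		return
	}
	_ = os.WriteFile("kubectl", bin, 0o755)
	fmt.Println("kubectl verified and written")
}
-- /illustrative Go sketch --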
	I0831 22:30:13.242909   32390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 22:30:13.253222   32390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:30:13.272112   32390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:30:13.289461   32390 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:30:13.306177   32390 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:30:13.310622   32390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:30:13.323288   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:30:13.460174   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:30:13.478358   32390 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:30:13.478684   32390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:30:13.478733   32390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:30:13.494270   32390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0831 22:30:13.494721   32390 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:30:13.495175   32390 main.go:141] libmachine: Using API Version  1
	I0831 22:30:13.495195   32390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:30:13.495546   32390 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:30:13.495736   32390 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:30:13.495915   32390 start.go:317] joinCluster: &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:30:13.496070   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0831 22:30:13.496090   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:30:13.498768   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:13.499166   32390 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:30:13.499194   32390 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:30:13.499319   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:30:13.499515   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:30:13.499673   32390 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:30:13.499806   32390 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:30:13.651030   32390 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:30:13.651084   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ihyohl.5nvwjgxowwz1ejsy --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m03 --control-plane --apiserver-advertise-address=192.168.39.26 --apiserver-bind-port=8443"
	I0831 22:30:36.021355   32390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ihyohl.5nvwjgxowwz1ejsy --discovery-token-ca-cert-hash sha256:9fa6be0895ba5b649f9af03fe61efd50d794d6b8c1010c3b51c25c214821372e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-957517-m03 --control-plane --apiserver-advertise-address=192.168.39.26 --apiserver-bind-port=8443": (22.370247548s)
	I0831 22:30:36.021389   32390 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0831 22:30:36.666541   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957517-m03 minikube.k8s.io/updated_at=2024_08_31T22_30_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=ha-957517 minikube.k8s.io/primary=false
	I0831 22:30:36.782200   32390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957517-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0831 22:30:36.894655   32390 start.go:319] duration metric: took 23.398737337s to joinCluster
	I0831 22:30:36.894733   32390 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:30:36.895064   32390 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:30:36.896743   32390 out.go:177] * Verifying Kubernetes components...
	I0831 22:30:36.898389   32390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:30:37.151123   32390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:30:37.181266   32390 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:30:37.181679   32390 kapi.go:59] client config for ha-957517: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 22:30:37.181764   32390 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.137:8443
	I0831 22:30:37.182062   32390 node_ready.go:35] waiting up to 6m0s for node "ha-957517-m03" to be "Ready" ...
	I0831 22:30:37.182151   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:37.182162   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:37.182176   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:37.182185   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:37.185908   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:37.683239   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:37.683262   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:37.683273   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:37.683277   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:37.687843   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:38.183119   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:38.183141   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:38.183148   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:38.183153   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:38.187159   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:38.682343   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:38.682373   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:38.682385   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:38.682391   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:38.686020   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:39.182624   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:39.182649   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:39.182660   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:39.182666   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:39.185813   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:39.186458   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:39.683261   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:39.683286   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:39.683294   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:39.683300   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:39.686703   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:40.182678   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:40.182705   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:40.182715   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:40.182720   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:40.186456   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:40.682552   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:40.682572   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:40.682580   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:40.682583   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:40.687031   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:41.182626   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:41.182647   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:41.182653   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:41.182656   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:41.186239   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:41.186889   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:41.683085   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:41.683111   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:41.683123   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:41.683127   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:41.687320   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:42.182442   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:42.182467   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:42.182479   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:42.182485   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:42.185704   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:42.683173   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:42.683196   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:42.683206   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:42.683211   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:42.686679   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:43.182706   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:43.182728   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:43.182739   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:43.182743   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:43.186197   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:43.682319   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:43.682339   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:43.682348   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:43.682354   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:43.685892   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:43.686914   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:44.182675   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:44.182698   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:44.182708   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:44.182712   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:44.186543   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:44.683099   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:44.683119   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:44.683127   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:44.683132   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:44.686468   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:45.182558   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:45.182581   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:45.182592   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:45.182598   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:45.186214   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:45.682223   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:45.682242   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:45.682251   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:45.682255   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:45.686437   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:45.687048   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:46.182832   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:46.182857   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:46.182866   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:46.182872   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:46.186283   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:46.683105   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:46.683130   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:46.683138   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:46.683143   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:46.686663   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:47.182596   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:47.182617   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:47.182624   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:47.182628   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:47.186056   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:47.682514   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:47.682541   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:47.682552   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:47.682560   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:47.686089   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:48.182262   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:48.182282   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:48.182296   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:48.182300   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:48.185340   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:48.185861   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:48.683345   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:48.683369   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:48.683381   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:48.683387   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:48.686730   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:49.182208   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:49.182227   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:49.182236   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:49.182240   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:49.184998   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:49.682281   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:49.682304   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:49.682311   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:49.682316   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:49.685738   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:50.182436   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:50.182459   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:50.182466   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:50.182470   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:50.185718   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:50.186153   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:50.682526   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:50.682547   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:50.682555   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:50.682558   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:50.685921   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:51.182587   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:51.182610   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:51.182619   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:51.182626   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:51.186039   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:51.682731   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:51.682753   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:51.682761   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:51.682764   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:51.686183   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:52.183178   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:52.183205   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:52.183216   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:52.183222   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:52.186501   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:52.187171   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:52.682975   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:52.683002   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:52.683014   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:52.683020   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:52.686693   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:53.182902   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:53.182922   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:53.182930   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:53.182933   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:53.186625   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:53.682801   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:53.682823   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:53.682831   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:53.682835   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:53.686883   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:30:54.182740   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:54.182765   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:54.182773   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:54.182777   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:54.186180   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:54.682742   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:54.682765   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:54.682773   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:54.682777   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:54.686309   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:54.687039   32390 node_ready.go:53] node "ha-957517-m03" has status "Ready":"False"
	I0831 22:30:55.182332   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:55.182361   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:55.182369   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:55.182375   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:55.185743   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:55.682927   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:55.682952   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:55.682960   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:55.682964   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:55.686522   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.182258   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:56.182280   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.182288   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.182291   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.186282   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.683068   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:56.683089   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.683112   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.683116   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.686477   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.687105   32390 node_ready.go:49] node "ha-957517-m03" has status "Ready":"True"
	I0831 22:30:56.687130   32390 node_ready.go:38] duration metric: took 19.505042541s for node "ha-957517-m03" to be "Ready" ...
	I0831 22:30:56.687150   32390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:30:56.687265   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:30:56.687280   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.687288   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.687291   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.692867   32390 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 22:30:56.699462   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.699536   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-k7rsc
	I0831 22:30:56.699547   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.699559   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.699571   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.702651   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:56.703215   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:56.703228   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.703236   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.703239   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.705694   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.706135   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.706150   32390 pod_ready.go:82] duration metric: took 6.667795ms for pod "coredns-6f6b679f8f-k7rsc" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.706158   32390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.706202   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-pc7gn
	I0831 22:30:56.706209   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.706216   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.706222   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.708870   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.709768   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:56.709781   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.709790   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.709794   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.712066   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.712571   32390 pod_ready.go:93] pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.712584   32390 pod_ready.go:82] duration metric: took 6.4208ms for pod "coredns-6f6b679f8f-pc7gn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.712592   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.712633   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517
	I0831 22:30:56.712640   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.712646   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.712653   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.714854   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.715364   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:56.715378   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.715385   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.715390   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.717242   32390 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0831 22:30:56.717667   32390 pod_ready.go:93] pod "etcd-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.717681   32390 pod_ready.go:82] duration metric: took 5.081377ms for pod "etcd-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.717692   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.717783   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m02
	I0831 22:30:56.717794   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.717804   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.717812   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.720147   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.720868   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:56.720887   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.720898   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.720903   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.723247   32390 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 22:30:56.723851   32390 pod_ready.go:93] pod "etcd-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:56.723868   32390 pod_ready.go:82] duration metric: took 6.166126ms for pod "etcd-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.723879   32390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:56.883231   32390 request.go:632] Waited for 159.272181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m03
	I0831 22:30:56.883301   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517-m03
	I0831 22:30:56.883309   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:56.883319   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:56.883344   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:56.887103   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.083282   32390 request.go:632] Waited for 195.276518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:57.083372   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:57.083380   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.083397   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.083403   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.086479   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.087146   32390 pod_ready.go:93] pod "etcd-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:57.087164   32390 pod_ready.go:82] duration metric: took 363.277554ms for pod "etcd-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.087186   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.283721   32390 request.go:632] Waited for 196.468387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:30:57.283784   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517
	I0831 22:30:57.283790   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.283800   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.283806   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.287750   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.484111   32390 request.go:632] Waited for 195.347511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:57.484178   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:57.484185   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.484195   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.484205   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.487283   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.488101   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:57.488120   32390 pod_ready.go:82] duration metric: took 400.923504ms for pod "kube-apiserver-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.488130   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.683294   32390 request.go:632] Waited for 195.094427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:30:57.683392   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m02
	I0831 22:30:57.683402   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.683414   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.683422   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.687181   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.883511   32390 request.go:632] Waited for 195.381148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:57.883565   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:57.883570   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:57.883577   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:57.883580   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:57.886823   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:57.887372   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:57.887393   32390 pod_ready.go:82] duration metric: took 399.255799ms for pod "kube-apiserver-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:57.887402   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.083472   32390 request.go:632] Waited for 195.991565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m03
	I0831 22:30:58.083530   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517-m03
	I0831 22:30:58.083536   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.083543   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.083549   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.087070   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.283176   32390 request.go:632] Waited for 195.281909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:58.283262   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:58.283274   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.283284   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.283291   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.286495   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.287188   32390 pod_ready.go:93] pod "kube-apiserver-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:58.287209   32390 pod_ready.go:82] duration metric: took 399.798926ms for pod "kube-apiserver-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.287221   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.483167   32390 request.go:632] Waited for 195.876889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:30:58.483242   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517
	I0831 22:30:58.483253   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.483266   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.483274   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.486774   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.684037   32390 request.go:632] Waited for 196.343131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:58.684102   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:30:58.684109   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.684117   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.684123   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.688025   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:58.688814   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:58.688837   32390 pod_ready.go:82] duration metric: took 401.604106ms for pod "kube-controller-manager-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.688853   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:58.883936   32390 request.go:632] Waited for 194.998979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:30:58.883998   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m02
	I0831 22:30:58.884003   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:58.884010   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:58.884015   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:58.887937   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.084000   32390 request.go:632] Waited for 195.107632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:59.084053   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:30:59.084058   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.084065   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.084069   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.087199   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.087753   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:59.087770   32390 pod_ready.go:82] duration metric: took 398.906989ms for pod "kube-controller-manager-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.087780   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.283993   32390 request.go:632] Waited for 196.135453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m03
	I0831 22:30:59.284049   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957517-m03
	I0831 22:30:59.284057   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.284066   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.284075   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.287461   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.483706   32390 request.go:632] Waited for 195.38146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.483782   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.483790   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.483801   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.483812   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.487107   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.487734   32390 pod_ready.go:93] pod "kube-controller-manager-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:59.487753   32390 pod_ready.go:82] duration metric: took 399.967358ms for pod "kube-controller-manager-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.487763   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5c5hn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.683854   32390 request.go:632] Waited for 196.033052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5c5hn
	I0831 22:30:59.683954   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5c5hn
	I0831 22:30:59.683966   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.683976   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.683984   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.687475   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.883786   32390 request.go:632] Waited for 195.364934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.883843   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:30:59.883850   32390 round_trippers.go:469] Request Headers:
	I0831 22:30:59.883861   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:30:59.883868   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:30:59.887645   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:30:59.888410   32390 pod_ready.go:93] pod "kube-proxy-5c5hn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:30:59.888433   32390 pod_ready.go:82] duration metric: took 400.662277ms for pod "kube-proxy-5c5hn" in "kube-system" namespace to be "Ready" ...
	I0831 22:30:59.888447   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.083487   32390 request.go:632] Waited for 194.947499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:31:00.083552   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvpbk
	I0831 22:31:00.083559   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.083570   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.083581   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.087488   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:00.283769   32390 request.go:632] Waited for 195.336987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:00.283856   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:00.283864   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.283875   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.283884   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.293253   32390 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 22:31:00.293916   32390 pod_ready.go:93] pod "kube-proxy-dvpbk" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:00.293939   32390 pod_ready.go:82] duration metric: took 405.482498ms for pod "kube-proxy-dvpbk" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.293952   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.484062   32390 request.go:632] Waited for 190.030367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:31:00.484130   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xrp64
	I0831 22:31:00.484140   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.484150   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.484158   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.487988   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:00.683177   32390 request.go:632] Waited for 194.320148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:00.683233   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:00.683239   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.683246   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.683250   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.687212   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:00.688205   32390 pod_ready.go:93] pod "kube-proxy-xrp64" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:00.688226   32390 pod_ready.go:82] duration metric: took 394.267834ms for pod "kube-proxy-xrp64" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.688238   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:00.883223   32390 request.go:632] Waited for 194.896382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:31:00.883295   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517
	I0831 22:31:00.883302   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:00.883312   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:00.883321   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:00.886609   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.083395   32390 request.go:632] Waited for 195.863734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:01.083445   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517
	I0831 22:31:01.083451   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.083458   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.083462   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.087010   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.087580   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:01.087606   32390 pod_ready.go:82] duration metric: took 399.360395ms for pod "kube-scheduler-ha-957517" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.087620   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.283642   32390 request.go:632] Waited for 195.940969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:31:01.283718   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m02
	I0831 22:31:01.283727   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.283738   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.283747   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.287223   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.483305   32390 request.go:632] Waited for 195.28996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:01.483408   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m02
	I0831 22:31:01.483417   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.483428   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.483436   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.487095   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.487609   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:01.487630   32390 pod_ready.go:82] duration metric: took 400.001504ms for pod "kube-scheduler-ha-957517-m02" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.487645   32390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.683649   32390 request.go:632] Waited for 195.915486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m03
	I0831 22:31:01.683706   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957517-m03
	I0831 22:31:01.683712   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.683719   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.683724   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.687858   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:31:01.883107   32390 request.go:632] Waited for 194.303617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:31:01.883178   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes/ha-957517-m03
	I0831 22:31:01.883184   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.883190   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.883195   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.887179   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:01.887839   32390 pod_ready.go:93] pod "kube-scheduler-ha-957517-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 22:31:01.887860   32390 pod_ready.go:82] duration metric: took 400.201925ms for pod "kube-scheduler-ha-957517-m03" in "kube-system" namespace to be "Ready" ...
	I0831 22:31:01.887874   32390 pod_ready.go:39] duration metric: took 5.200711661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:31:01.887888   32390 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:31:01.887944   32390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:31:01.904041   32390 api_server.go:72] duration metric: took 25.00927153s to wait for apiserver process to appear ...
	I0831 22:31:01.904069   32390 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:31:01.904091   32390 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0831 22:31:01.908570   32390 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0831 22:31:01.908655   32390 round_trippers.go:463] GET https://192.168.39.137:8443/version
	I0831 22:31:01.908666   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:01.908678   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:01.908682   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:01.909745   32390 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0831 22:31:01.909900   32390 api_server.go:141] control plane version: v1.31.0
	I0831 22:31:01.909922   32390 api_server.go:131] duration metric: took 5.846706ms to wait for apiserver health ...
	I0831 22:31:01.909932   32390 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:31:02.083280   32390 request.go:632] Waited for 173.27165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.083431   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.083443   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.083451   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.083456   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.090427   32390 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 22:31:02.097912   32390 system_pods.go:59] 24 kube-system pods found
	I0831 22:31:02.097946   32390 system_pods.go:61] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:31:02.097952   32390 system_pods.go:61] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:31:02.097956   32390 system_pods.go:61] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:31:02.097960   32390 system_pods.go:61] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:31:02.097963   32390 system_pods.go:61] "etcd-ha-957517-m03" [2633fae5-5ee4-4509-9465-f2b720100d7c] Running
	I0831 22:31:02.097966   32390 system_pods.go:61] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:31:02.097969   32390 system_pods.go:61] "kindnet-jqhdm" [44214ffc-79cc-4762-808b-74c5c5b4c923] Running
	I0831 22:31:02.097972   32390 system_pods.go:61] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:31:02.097976   32390 system_pods.go:61] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:31:02.097979   32390 system_pods.go:61] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:31:02.097982   32390 system_pods.go:61] "kube-apiserver-ha-957517-m03" [43f18bca-f02c-4ca0-8b75-97537a3bc8d0] Running
	I0831 22:31:02.097985   32390 system_pods.go:61] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:31:02.097990   32390 system_pods.go:61] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:31:02.097993   32390 system_pods.go:61] "kube-controller-manager-ha-957517-m03" [534c9743-745b-4a51-b5a9-0bf6b555e504] Running
	I0831 22:31:02.097996   32390 system_pods.go:61] "kube-proxy-5c5hn" [7c2a5860-28aa-4dc3-977f-17291f3e15fa] Running
	I0831 22:31:02.098001   32390 system_pods.go:61] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:31:02.098007   32390 system_pods.go:61] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:31:02.098010   32390 system_pods.go:61] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:31:02.098014   32390 system_pods.go:61] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:31:02.098019   32390 system_pods.go:61] "kube-scheduler-ha-957517-m03" [d2e0a9a9-5dbd-4e8c-9282-2c87d1821a86] Running
	I0831 22:31:02.098022   32390 system_pods.go:61] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:31:02.098028   32390 system_pods.go:61] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:31:02.098031   32390 system_pods.go:61] "kube-vip-ha-957517-m03" [42993b2f-bc3b-436c-9c0f-ba89cce80e72] Running
	I0831 22:31:02.098036   32390 system_pods.go:61] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:31:02.098042   32390 system_pods.go:74] duration metric: took 188.104776ms to wait for pod list to return data ...
	I0831 22:31:02.098053   32390 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:31:02.283477   32390 request.go:632] Waited for 185.355709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:31:02.283532   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/default/serviceaccounts
	I0831 22:31:02.283537   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.283546   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.283552   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.287643   32390 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 22:31:02.287766   32390 default_sa.go:45] found service account: "default"
	I0831 22:31:02.287780   32390 default_sa.go:55] duration metric: took 189.721492ms for default service account to be created ...
	I0831 22:31:02.287788   32390 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:31:02.484140   32390 request.go:632] Waited for 196.257862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.484205   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/namespaces/kube-system/pods
	I0831 22:31:02.484213   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.484224   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.484232   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.490496   32390 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 22:31:02.497868   32390 system_pods.go:86] 24 kube-system pods found
	I0831 22:31:02.497898   32390 system_pods.go:89] "coredns-6f6b679f8f-k7rsc" [30b16969-bc2e-4ad9-b6c3-20b6d6775159] Running
	I0831 22:31:02.497904   32390 system_pods.go:89] "coredns-6f6b679f8f-pc7gn" [a20dc0e7-f1d3-4fca-9dab-e93224a8b342] Running
	I0831 22:31:02.497908   32390 system_pods.go:89] "etcd-ha-957517" [074a0206-92b6-405e-9e9f-2a654b598091] Running
	I0831 22:31:02.497912   32390 system_pods.go:89] "etcd-ha-957517-m02" [d53b90d8-8615-4c06-8843-5c2025d51f08] Running
	I0831 22:31:02.497916   32390 system_pods.go:89] "etcd-ha-957517-m03" [2633fae5-5ee4-4509-9465-f2b720100d7c] Running
	I0831 22:31:02.497919   32390 system_pods.go:89] "kindnet-bmxh2" [5fb4f46f-9210-47d0-b988-c9ca65d1baab] Running
	I0831 22:31:02.497922   32390 system_pods.go:89] "kindnet-jqhdm" [44214ffc-79cc-4762-808b-74c5c5b4c923] Running
	I0831 22:31:02.497926   32390 system_pods.go:89] "kindnet-tkvsc" [0fe590fb-e049-4622-8702-01e32fd77c4e] Running
	I0831 22:31:02.497930   32390 system_pods.go:89] "kube-apiserver-ha-957517" [93d75f0f-7e62-45fc-b66f-bc4020d2903b] Running
	I0831 22:31:02.497934   32390 system_pods.go:89] "kube-apiserver-ha-957517-m02" [f3861fac-12ee-4178-ad06-b2c61deca2cc] Running
	I0831 22:31:02.497937   32390 system_pods.go:89] "kube-apiserver-ha-957517-m03" [43f18bca-f02c-4ca0-8b75-97537a3bc8d0] Running
	I0831 22:31:02.497941   32390 system_pods.go:89] "kube-controller-manager-ha-957517" [90ed2311-3ee4-4086-bac8-df540d369bc7] Running
	I0831 22:31:02.497964   32390 system_pods.go:89] "kube-controller-manager-ha-957517-m02" [1b4d6e53-27fe-40c5-aed9-6e2a75437d15] Running
	I0831 22:31:02.497971   32390 system_pods.go:89] "kube-controller-manager-ha-957517-m03" [534c9743-745b-4a51-b5a9-0bf6b555e504] Running
	I0831 22:31:02.497975   32390 system_pods.go:89] "kube-proxy-5c5hn" [7c2a5860-28aa-4dc3-977f-17291f3e15fa] Running
	I0831 22:31:02.497979   32390 system_pods.go:89] "kube-proxy-dvpbk" [b7453be1-076a-480e-9f02-20f7a1f62108] Running
	I0831 22:31:02.497983   32390 system_pods.go:89] "kube-proxy-xrp64" [e4ac77de-bd1e-4fc5-902e-16f0b5de614c] Running
	I0831 22:31:02.497986   32390 system_pods.go:89] "kube-scheduler-ha-957517" [5dc03172-c09c-43fa-a9bc-c33e70e04e83] Running
	I0831 22:31:02.497991   32390 system_pods.go:89] "kube-scheduler-ha-957517-m02" [d0defdf4-9f01-4a02-aef0-3e838059af5b] Running
	I0831 22:31:02.497994   32390 system_pods.go:89] "kube-scheduler-ha-957517-m03" [d2e0a9a9-5dbd-4e8c-9282-2c87d1821a86] Running
	I0831 22:31:02.497997   32390 system_pods.go:89] "kube-vip-ha-957517" [ed1d414d-9015-488a-98e6-0acd65d07e97] Running
	I0831 22:31:02.498001   32390 system_pods.go:89] "kube-vip-ha-957517-m02" [93e7e07e-807c-420c-aa61-c7b5732836fc] Running
	I0831 22:31:02.498005   32390 system_pods.go:89] "kube-vip-ha-957517-m03" [42993b2f-bc3b-436c-9c0f-ba89cce80e72] Running
	I0831 22:31:02.498008   32390 system_pods.go:89] "storage-provisioner" [b828130a-54f5-4449-9ff5-e47b4236c0dc] Running
	I0831 22:31:02.498023   32390 system_pods.go:126] duration metric: took 210.22695ms to wait for k8s-apps to be running ...
	I0831 22:31:02.498029   32390 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:31:02.498072   32390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:31:02.518083   32390 system_svc.go:56] duration metric: took 20.043969ms WaitForService to wait for kubelet
	I0831 22:31:02.518117   32390 kubeadm.go:582] duration metric: took 25.623350196s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:31:02.518159   32390 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:31:02.683560   32390 request.go:632] Waited for 165.316062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.137:8443/api/v1/nodes
	I0831 22:31:02.683638   32390 round_trippers.go:463] GET https://192.168.39.137:8443/api/v1/nodes
	I0831 22:31:02.683646   32390 round_trippers.go:469] Request Headers:
	I0831 22:31:02.683657   32390 round_trippers.go:473]     Accept: application/json, */*
	I0831 22:31:02.683670   32390 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0831 22:31:02.687591   32390 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 22:31:02.688836   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:31:02.688858   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:31:02.688870   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:31:02.688874   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:31:02.688878   32390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 22:31:02.688881   32390 node_conditions.go:123] node cpu capacity is 2
	I0831 22:31:02.688885   32390 node_conditions.go:105] duration metric: took 170.720648ms to run NodePressure ...
	I0831 22:31:02.688895   32390 start.go:241] waiting for startup goroutines ...
	I0831 22:31:02.688913   32390 start.go:255] writing updated cluster config ...
	I0831 22:31:02.689194   32390 ssh_runner.go:195] Run: rm -f paused
	I0831 22:31:02.739626   32390 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:31:02.741538   32390 out.go:177] * Done! kubectl is now configured to use "ha-957517" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.075131630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bd50a50-7fbf-455e-994c-17d2d0005f65 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.076407972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3da2e0ee-bdd9-4642-bdc3-8af83939e04a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.076901687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143740076877240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3da2e0ee-bdd9-4642-bdc3-8af83939e04a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.077668998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6137905-289a-4180-a578-99383bbeefe1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.077743229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6137905-289a-4180-a578-99383bbeefe1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.078007625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6137905-289a-4180-a578-99383bbeefe1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.117058990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efc1291e-9620-4a31-a324-9e56a15785af name=/runtime.v1.RuntimeService/Version
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.117138166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efc1291e-9620-4a31-a324-9e56a15785af name=/runtime.v1.RuntimeService/Version
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.118280003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f62049aa-3c8b-46ed-8bc9-6bcdcbb9b07f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.119036072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143740119012546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f62049aa-3c8b-46ed-8bc9-6bcdcbb9b07f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.119655936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ff85f2c-023f-408f-8af2-6bbb46dc8834 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.119732933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ff85f2c-023f-408f-8af2-6bbb46dc8834 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.120100318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ff85f2c-023f-408f-8af2-6bbb46dc8834 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.122413759Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9222b395-4308-4688-89af-10e578ff3dbb name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.122875643Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-zdnwd,Uid:d4c669b0-a0da-4c7e-bc9a-976009a0ee37,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143464866521650,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-31T22:31:03.653729574Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-k7rsc,Uid:30b16969-bc2e-4ad9-b6c3-20b6d6775159,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1725143322568100436,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-31T22:28:42.231710112Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b828130a-54f5-4449-9ff5-e47b4236c0dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143322538993624,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-31T22:28:42.233973582Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-pc7gn,Uid:a20dc0e7-f1d3-4fca-9dab-e93224a8b342,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1725143322532574444,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-31T22:28:42.225169002Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&PodSandboxMetadata{Name:kindnet-tkvsc,Uid:0fe590fb-e049-4622-8702-01e32fd77c4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143306893178239,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-08-31T22:28:25.974878710Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&PodSandboxMetadata{Name:kube-proxy-xrp64,Uid:e4ac77de-bd1e-4fc5-902e-16f0b5de614c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143306888987602,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-31T22:28:25.973262608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&PodSandboxMetadata{Name:etcd-ha-957517,Uid:676db26fc51d314abff76b324bee52f0,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1725143295164630629,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.137:2379,kubernetes.io/config.hash: 676db26fc51d314abff76b324bee52f0,kubernetes.io/config.seen: 2024-08-31T22:28:14.490592226Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-957517,Uid:45a6ec4251f5958391b270ae9be8513b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143295158362854,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251
f5958391b270ae9be8513b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.137:8443,kubernetes.io/config.hash: 45a6ec4251f5958391b270ae9be8513b,kubernetes.io/config.seen: 2024-08-31T22:28:14.490593937Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-957517,Uid:d23a7707049061c750eeb090f3e80738,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143295139993055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{kubernetes.io/config.hash: d23a7707049061c750eeb090f3e80738,kubernetes.io/config.seen: 2024-08-31T22:28:14.490590676Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54c5069584051966a9d8
ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-957517,Uid:f199e54e5de474bccab17312a8e8a1d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143295135863522,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f199e54e5de474bccab17312a8e8a1d5,kubernetes.io/config.seen: 2024-08-31T22:28:14.490583005Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-957517,Uid:09972f10319bc0c3a74ffeb6bb3a4841,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725143295134668740,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 09972f10319bc0c3a74ffeb6bb3a4841,kubernetes.io/config.seen: 2024-08-31T22:28:14.490589035Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9222b395-4308-4688-89af-10e578ff3dbb name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.123579632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fdbd230-3561-4e88-898f-2c89ce2e98fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.123628757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fdbd230-3561-4e88-898f-2c89ce2e98fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.123897101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fdbd230-3561-4e88-898f-2c89ce2e98fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.163053729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d66f45c1-d8f1-402b-af2b-adba2050ca3e name=/runtime.v1.RuntimeService/Version
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.163146978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d66f45c1-d8f1-402b-af2b-adba2050ca3e name=/runtime.v1.RuntimeService/Version
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.164156699Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cf177ed-458b-43a0-ae04-8bf10bae69c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.164890484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143740164864048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cf177ed-458b-43a0-ae04-8bf10bae69c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.165351345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86c5b025-8fba-4d8a-8ab6-f8ec3dc90692 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.165477590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86c5b025-8fba-4d8a-8ab6-f8ec3dc90692 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:35:40 ha-957517 crio[660]: time="2024-08-31 22:35:40.165710466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143468325934317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322857992511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143322792493024,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74,PodSandboxId:f447d0de4324d0ecd722f79b97030c213d75a3d5b7d0e863fb67e1f69e87f74b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725143322720288339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725143310935577221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172514330
7100041381,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe,PodSandboxId:4b473227ca455aaf1d97c4a401636fe9c9714a6353948798b471a464e12a0ac3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172514329818
8336047,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d23a7707049061c750eeb090f3e80738,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143295443719436,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143295412236223,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d,PodSandboxId:53f202af525dd691e5b74abdc3e774e238c7c8f1e2ef8631e603348c3eb76c42,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725143295351542234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899,PodSandboxId:54c5069584051966a9d8ceb5c197f04ff75feb8756243462bf80217a2f8c61b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725143295300951149,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86c5b025-8fba-4d8a-8ab6-f8ec3dc90692 name=/runtime.v1.RuntimeService/ListContainers
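
The CRI-O debug entries above were captured by the log collector; a minimal sketch of pulling the same stream by hand, assuming the ha-957517 profile from this run and that crio runs as a systemd unit inside the minikube VM (the exact ssh command-passing form is an assumption, not something shown in this report):

	# Tail the CRI-O journal on the primary node of the ha-957517 profile.
	out/minikube-linux-amd64 -p ha-957517 ssh "sudo journalctl -u crio --no-pager | tail -n 100"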
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc9ea3c2c4cc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   9f283cd54a11f       busybox-7dff88458-zdnwd
	4a85b32a796fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   6e863e5cd9b9c       coredns-6f6b679f8f-k7rsc
	0cfba67fe9abb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   298283fc5c9c2       coredns-6f6b679f8f-pc7gn
	c7f58140d0328       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f447d0de4324d       storage-provisioner
	35cc0bc2b6243       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   37828bdcd38b5       kindnet-tkvsc
	b1a123f41fac1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   99877abcdf5a7       kube-proxy-xrp64
	883967c8cb807       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   4b473227ca455       kube-vip-ha-957517
	e1c6a4e36ddb2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   144e67a21ecaa       kube-scheduler-ha-957517
	f3ae732e5626c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   960ae9b08a3ee       etcd-ha-957517
	179da26791305       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   53f202af525dd       kube-apiserver-ha-957517
	f4284e308e02a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   54c5069584051       kube-controller-manager-ha-957517
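
The "container status" table is the CRI view of the node; a sketch of reproducing it directly with crictl, using the ha-957517 profile and the crio socket path advertised by the node annotations later in this report (unix:///var/run/crio/crio.sock):

	# List all CRI containers (running and exited) on the primary node.
	out/minikube-linux-amd64 -p ha-957517 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"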
	
	
	==> coredns [0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49314 - 30950 "HINFO IN 2244475907911654407.2267664286832635684. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013631152s
	[INFO] 10.244.2.2:59391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000367295s
	[INFO] 10.244.2.2:45655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001322338s
	[INFO] 10.244.2.2:45804 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001542866s
	[INFO] 10.244.0.4:36544 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002043312s
	[INFO] 10.244.1.2:34999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003609s
	[INFO] 10.244.1.2:45741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017294944s
	[INFO] 10.244.1.2:57093 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224681s
	[INFO] 10.244.2.2:49538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000358252s
	[INFO] 10.244.2.2:53732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00185161s
	[INFO] 10.244.2.2:41165 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231402s
	[INFO] 10.244.2.2:60230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118116s
	[INFO] 10.244.2.2:42062 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271609s
	[INFO] 10.244.0.4:49034 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067938s
	[INFO] 10.244.0.4:36002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196492s
	[INFO] 10.244.1.2:54186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124969s
	[INFO] 10.244.1.2:47709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000506218s
	[INFO] 10.244.0.4:54205 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087475s
	[INFO] 10.244.0.4:48802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055159s
	[INFO] 10.244.1.2:46825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148852s
	[INFO] 10.244.2.2:60523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183145s
	[INFO] 10.244.0.4:53842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116944s
	[INFO] 10.244.0.4:56291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000217808s
	[INFO] 10.244.0.4:53612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00028657s
	
	
	==> coredns [4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e] <==
	[INFO] 10.244.1.2:36845 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203981s
	[INFO] 10.244.2.2:34667 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133457s
	[INFO] 10.244.2.2:42430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001331909s
	[INFO] 10.244.2.2:33158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151531s
	[INFO] 10.244.0.4:34378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135148s
	[INFO] 10.244.0.4:43334 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001723638s
	[INFO] 10.244.0.4:54010 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080627s
	[INFO] 10.244.0.4:47700 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424459s
	[INFO] 10.244.0.4:50346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070487s
	[INFO] 10.244.0.4:43522 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051146s
	[INFO] 10.244.1.2:60157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099584s
	[INFO] 10.244.1.2:48809 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104515s
	[INFO] 10.244.2.2:37042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132626s
	[INFO] 10.244.2.2:38343 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117546s
	[INFO] 10.244.2.2:53716 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092804s
	[INFO] 10.244.2.2:59881 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068808s
	[INFO] 10.244.0.4:40431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093051s
	[INFO] 10.244.0.4:39552 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087951s
	[INFO] 10.244.1.2:59301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113713s
	[INFO] 10.244.1.2:40299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210744s
	[INFO] 10.244.1.2:54276 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210063s
	[INFO] 10.244.2.2:34222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307653s
	[INFO] 10.244.2.2:42028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089936s
	[INFO] 10.244.2.2:47927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066426s
	[INFO] 10.244.0.4:39601 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085891s
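
The CoreDNS lines above are routine query traffic (NOERROR/NXDOMAIN only); if cluster DNS needs to be exercised by hand, a lookup like the recorded ones can be issued from a throwaway pod — a sketch, with the pod name dns-probe chosen arbitrarily and the kubeconfig context assumed to match the profile name:

	# Resolve the kubernetes Service through CoreDNS, then remove the pod.
	kubectl --context ha-957517 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local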
	
	
	==> describe nodes <==
	Name:               ha-957517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_28_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:35:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:31:25 +0000   Sat, 31 Aug 2024 22:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-957517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 438078db78ee43a0bfe8057c915827a8
	  System UUID:                438078db-78ee-43a0-bfe8-057c915827a8
	  Boot ID:                    e88a2dfb-1351-416c-9b78-5a255e623f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zdnwd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-6f6b679f8f-k7rsc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m14s
	  kube-system                 coredns-6f6b679f8f-pc7gn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m14s
	  kube-system                 etcd-ha-957517                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m21s
	  kube-system                 kindnet-tkvsc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-ha-957517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-controller-manager-ha-957517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-proxy-xrp64                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-ha-957517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-vip-ha-957517                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m19s (x2 over 7m19s)  kubelet          Node ha-957517 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x2 over 7m19s)  kubelet          Node ha-957517 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x2 over 7m19s)  kubelet          Node ha-957517 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m15s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal  NodeReady                6m58s (x2 over 6m58s)  kubelet          Node ha-957517 status is now: NodeReady
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	
	
	Name:               ha-957517-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_29_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:29:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:32:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 22:31:20 +0000   Sat, 31 Aug 2024 22:32:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-957517-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a152f180715c42228f54c353a9e8c1bb
	  System UUID:                a152f180-715c-4222-8f54-c353a9e8c1bb
	  Boot ID:                    475f4e70-e580-4071-92be-a87256c6caa3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cwtrb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-957517-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-bmxh2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-957517-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-957517-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-dvpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-957517-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-957517-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-957517-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  NodeNotReady             2m48s                  node-controller  Node ha-957517-m02 status is now: NodeNotReady
	
	
	Name:               ha-957517-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_30_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:30:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:35:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:31:35 +0000   Sat, 31 Aug 2024 22:30:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-957517-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 886d8b963cd94078ae7cf268a2d07053
	  System UUID:                886d8b96-3cd9-4078-ae7c-f268a2d07053
	  Boot ID:                    cf8e9f17-005d-4cb8-af63-0ff51a14233f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkvvp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-957517-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m5s
	  kube-system                 kindnet-jqhdm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-ha-957517-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-957517-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-5c5hn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-ha-957517-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-vip-ha-957517-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node ha-957517-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node ha-957517-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node ha-957517-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal  RegisteredNode           4m58s                node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	
	
	Name:               ha-957517-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_31_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:31:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:35:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:31:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:31:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:31:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:32:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-957517-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08b180ad339e4d19acb3ea0e7328dc00
	  System UUID:                08b180ad-339e-4d19-acb3-ea0e7328dc00
	  Boot ID:                    eb027e2a-5c22-4721-9b4b-8b9696ccec09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2t9r8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-proxy-6f6xd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m55s            kube-proxy       
	  Normal  RegisteredNode           4m               node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m)  kubelet          Node ha-957517-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s            node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  RegisteredNode           3m55s            node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeReady                3m39s            kubelet          Node ha-957517-m04 status is now: NodeReady
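
The four node dumps above are standard describe output; a sketch of regenerating the same view and drilling into ha-957517-m02, the only node reporting Unknown/NotReady conditions (context name again assumed to equal the profile name):

	# Summarise node health, then dump the node whose kubelet stopped posting status.
	kubectl --context ha-957517 get nodes -o wide
	kubectl --context ha-957517 describe node ha-957517-m02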
	
	
	==> dmesg <==
	[Aug31 22:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050272] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040028] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780554] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.478094] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.617326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug31 22:28] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.064763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057170] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193531] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.118523] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.278233] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.003192] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.620544] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058441] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.958169] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083987] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.815006] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.616164] kauditd_printk_skb: 38 callbacks suppressed
	[Aug31 22:29] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18] <==
	{"level":"warn","ts":"2024-08-31T22:35:40.482546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.482914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.491044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.492564Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.499785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.504007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.507299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.529608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.531674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.535432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.578872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.584999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.591659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.595686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.598876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.604023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.608185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.614687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.621190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.624250Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.627090Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.630202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.635329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.635590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:35:40.641445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:35:40 up 7 min,  0 users,  load average: 0.72, 0.42, 0.22
	Linux ha-957517 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23] <==
	I0831 22:35:01.963858       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:35:11.969574       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:35:11.969619       1 main.go:299] handling current node
	I0831 22:35:11.969635       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:35:11.969640       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:35:11.969765       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:35:11.969790       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:35:11.969854       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:35:11.969874       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:35:21.965877       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:35:21.965972       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:35:21.966168       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:35:21.966198       1 main.go:299] handling current node
	I0831 22:35:21.966234       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:35:21.966251       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:35:21.966327       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:35:21.966345       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:35:31.964438       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:35:31.964485       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:35:31.964644       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:35:31.964669       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:35:31.964729       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:35:31.964750       1 main.go:299] handling current node
	I0831 22:35:31.964761       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:35:31.964766       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d] <==
	I0831 22:28:20.298828       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0831 22:28:20.306625       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137]
	I0831 22:28:20.308017       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 22:28:20.312249       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0831 22:28:20.586447       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0831 22:28:21.438940       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0831 22:28:21.452305       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0831 22:28:21.464936       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0831 22:28:25.687561       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0831 22:28:25.936439       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0831 22:31:09.632303       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38730: use of closed network connection
	E0831 22:31:09.821225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38754: use of closed network connection
	E0831 22:31:10.009673       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38772: use of closed network connection
	E0831 22:31:10.224354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38790: use of closed network connection
	E0831 22:31:10.414899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38816: use of closed network connection
	E0831 22:31:10.594803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38846: use of closed network connection
	E0831 22:31:10.784357       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38854: use of closed network connection
	E0831 22:31:10.966814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38880: use of closed network connection
	E0831 22:31:11.165809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38912: use of closed network connection
	E0831 22:31:11.454018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38918: use of closed network connection
	E0831 22:31:11.622063       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38940: use of closed network connection
	E0831 22:31:11.799191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38972: use of closed network connection
	E0831 22:31:11.972716       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38988: use of closed network connection
	E0831 22:31:12.149268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39010: use of closed network connection
	E0831 22:31:12.339572       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39026: use of closed network connection
	
	
	==> kube-controller-manager [f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899] <==
	I0831 22:31:40.644727       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-957517-m04" podCIDRs=["10.244.3.0/24"]
	I0831 22:31:40.644789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:40.644834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:40.656731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:40.914999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:41.092603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:41.465269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:42.269077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:42.347015       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:45.360870       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:45.361356       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-957517-m04"
	I0831 22:31:45.385156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:31:51.026559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:01.064913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:01.065464       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-957517-m04"
	I0831 22:32:01.088211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:02.290136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:11.274284       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:32:52.314276       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:32:52.314736       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-957517-m04"
	I0831 22:32:52.341104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:32:52.378963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.309902ms"
	I0831 22:32:52.379050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.666µs"
	I0831 22:32:55.459662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:32:57.549701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	
	
	==> kube-proxy [b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:28:27.350238       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:28:27.365865       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0831 22:28:27.366008       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:28:27.407549       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:28:27.407635       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:28:27.407682       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:28:27.410268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:28:27.410698       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:28:27.410744       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:28:27.411896       1 config.go:197] "Starting service config controller"
	I0831 22:28:27.412108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:28:27.412157       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:28:27.412174       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:28:27.412762       1 config.go:326] "Starting node config controller"
	I0831 22:28:27.415567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:28:27.512346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:28:27.512467       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:28:27.515752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3] <==
	W0831 22:28:19.960560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:28:19.960646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:28:19.981183       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:28:19.981234       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0831 22:28:22.321943       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 22:30:33.523418       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jqhdm\": pod kindnet-jqhdm is already assigned to node \"ha-957517-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-jqhdm" node="ha-957517-m03"
	E0831 22:30:33.524317       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 44214ffc-79cc-4762-808b-74c5c5b4c923(kube-system/kindnet-jqhdm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jqhdm"
	E0831 22:30:33.527453       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jqhdm\": pod kindnet-jqhdm is already assigned to node \"ha-957517-m03\"" pod="kube-system/kindnet-jqhdm"
	I0831 22:30:33.527536       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jqhdm" node="ha-957517-m03"
	E0831 22:31:03.668045       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkvvp\": pod busybox-7dff88458-fkvvp is already assigned to node \"ha-957517-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fkvvp" node="ha-957517-m03"
	E0831 22:31:03.669556       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8887e4b3-2a39-4b37-a077-d7deaf9a2772(default/busybox-7dff88458-fkvvp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fkvvp"
	E0831 22:31:03.669647       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkvvp\": pod busybox-7dff88458-fkvvp is already assigned to node \"ha-957517-m03\"" pod="default/busybox-7dff88458-fkvvp"
	I0831 22:31:03.669693       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fkvvp" node="ha-957517-m03"
	E0831 22:31:40.718597       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xmftg\": pod kube-proxy-xmftg is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xmftg" node="ha-957517-m04"
	E0831 22:31:40.718699       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xmftg\": pod kube-proxy-xmftg is already assigned to node \"ha-957517-m04\"" pod="kube-system/kube-proxy-xmftg"
	E0831 22:31:40.725285       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-srxdg\": pod kube-proxy-srxdg is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-srxdg" node="ha-957517-m04"
	E0831 22:31:40.725498       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-srxdg\": pod kube-proxy-srxdg is already assigned to node \"ha-957517-m04\"" pod="kube-system/kube-proxy-srxdg"
	E0831 22:31:40.726133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.726210       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6de5171d-ad2f-4f18-9d99-a6fc3709304c(kube-system/kindnet-2t9r8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2t9r8"
	E0831 22:31:40.726228       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-2t9r8"
	I0831 22:31:40.726253       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.731781       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:31:40.731866       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3457f0a0-fd3b-4e40-819f-9d57c29036e6(kube-system/kindnet-mljxh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mljxh"
	E0831 22:31:40.731884       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-mljxh"
	I0831 22:31:40.731900       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	
	
	==> kubelet <==
	Aug 31 22:34:21 ha-957517 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 22:34:21 ha-957517 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 22:34:21 ha-957517 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:34:21 ha-957517 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:34:21 ha-957517 kubelet[1303]: E0831 22:34:21.523275    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143661522850588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:21 ha-957517 kubelet[1303]: E0831 22:34:21.523312    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143661522850588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:31 ha-957517 kubelet[1303]: E0831 22:34:31.526580    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143671525063817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:31 ha-957517 kubelet[1303]: E0831 22:34:31.527025    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143671525063817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:41 ha-957517 kubelet[1303]: E0831 22:34:41.529063    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681528655792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:41 ha-957517 kubelet[1303]: E0831 22:34:41.529093    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143681528655792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:51 ha-957517 kubelet[1303]: E0831 22:34:51.530988    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143691530631196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:34:51 ha-957517 kubelet[1303]: E0831 22:34:51.531336    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143691530631196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:01 ha-957517 kubelet[1303]: E0831 22:35:01.534172    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143701533209312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:01 ha-957517 kubelet[1303]: E0831 22:35:01.534680    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143701533209312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:11 ha-957517 kubelet[1303]: E0831 22:35:11.537663    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143711537172988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:11 ha-957517 kubelet[1303]: E0831 22:35:11.537686    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143711537172988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:21 ha-957517 kubelet[1303]: E0831 22:35:21.414957    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 22:35:21 ha-957517 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 22:35:21 ha-957517 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 22:35:21 ha-957517 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:35:21 ha-957517 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:35:21 ha-957517 kubelet[1303]: E0831 22:35:21.539663    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143721539337092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:21 ha-957517 kubelet[1303]: E0831 22:35:21.539703    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143721539337092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:31 ha-957517 kubelet[1303]: E0831 22:35:31.541519    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143731541061961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:35:31 ha-957517 kubelet[1303]: E0831 22:35:31.541545    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725143731541061961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-957517 -n ha-957517
helpers_test.go:262: (dbg) Run:  kubectl --context ha-957517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:286: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (745.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-957517 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-957517 -v=7 --alsologtostderr
E0831 22:36:42.547063   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:37:10.249233   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-957517 -v=7 --alsologtostderr: exit status 82 (2m1.90354488s)

                                                
                                                
-- stdout --
	* Stopping node "ha-957517-m04"  ...
	* Stopping node "ha-957517-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:42.072797   38206 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:42.073016   38206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:42.073026   38206 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:42.073032   38206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:42.073212   38206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:35:42.073455   38206 out.go:352] Setting JSON to false
	I0831 22:35:42.073562   38206 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:42.073913   38206 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:42.074011   38206 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:35:42.074204   38206 mustload.go:65] Loading cluster: ha-957517
	I0831 22:35:42.074361   38206 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:35:42.074397   38206 stop.go:39] StopHost: ha-957517-m04
	I0831 22:35:42.074785   38206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:42.074839   38206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:42.089238   38206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0831 22:35:42.089754   38206 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:42.090374   38206 main.go:141] libmachine: Using API Version  1
	I0831 22:35:42.090402   38206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:42.090786   38206 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:42.093068   38206 out.go:177] * Stopping node "ha-957517-m04"  ...
	I0831 22:35:42.094533   38206 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0831 22:35:42.094565   38206 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:35:42.094770   38206 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0831 22:35:42.094788   38206 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:35:42.097629   38206 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:42.098019   38206 main.go:141] libmachine: (ha-957517-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:08:61", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:31:27 +0000 UTC Type:0 Mac:52:54:00:58:08:61 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-957517-m04 Clientid:01:52:54:00:58:08:61}
	I0831 22:35:42.098051   38206 main.go:141] libmachine: (ha-957517-m04) DBG | domain ha-957517-m04 has defined IP address 192.168.39.109 and MAC address 52:54:00:58:08:61 in network mk-ha-957517
	I0831 22:35:42.098146   38206 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHPort
	I0831 22:35:42.098333   38206 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHKeyPath
	I0831 22:35:42.098480   38206 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHUsername
	I0831 22:35:42.098624   38206 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m04/id_rsa Username:docker}
	I0831 22:35:42.182897   38206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0831 22:35:42.237423   38206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0831 22:35:42.294231   38206 main.go:141] libmachine: Stopping "ha-957517-m04"...
	I0831 22:35:42.294257   38206 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:42.295914   38206 main.go:141] libmachine: (ha-957517-m04) Calling .Stop
	I0831 22:35:42.299293   38206 main.go:141] libmachine: (ha-957517-m04) Waiting for machine to stop 0/120
	I0831 22:35:43.519796   38206 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:35:43.521052   38206 main.go:141] libmachine: Machine "ha-957517-m04" was stopped.
	I0831 22:35:43.521080   38206 stop.go:75] duration metric: took 1.426556706s to stop
	I0831 22:35:43.521100   38206 stop.go:39] StopHost: ha-957517-m03
	I0831 22:35:43.521513   38206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:35:43.521557   38206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:35:43.536517   38206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0831 22:35:43.536892   38206 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:35:43.537386   38206 main.go:141] libmachine: Using API Version  1
	I0831 22:35:43.537408   38206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:35:43.537764   38206 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:35:43.539882   38206 out.go:177] * Stopping node "ha-957517-m03"  ...
	I0831 22:35:43.541251   38206 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0831 22:35:43.541272   38206 main.go:141] libmachine: (ha-957517-m03) Calling .DriverName
	I0831 22:35:43.541518   38206 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0831 22:35:43.541540   38206 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHHostname
	I0831 22:35:43.544452   38206 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:43.545004   38206 main.go:141] libmachine: (ha-957517-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:49", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:29:59 +0000 UTC Type:0 Mac:52:54:00:5e:d5:49 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-957517-m03 Clientid:01:52:54:00:5e:d5:49}
	I0831 22:35:43.545023   38206 main.go:141] libmachine: (ha-957517-m03) DBG | domain ha-957517-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:5e:d5:49 in network mk-ha-957517
	I0831 22:35:43.545199   38206 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHPort
	I0831 22:35:43.545362   38206 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHKeyPath
	I0831 22:35:43.545492   38206 main.go:141] libmachine: (ha-957517-m03) Calling .GetSSHUsername
	I0831 22:35:43.545685   38206 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m03/id_rsa Username:docker}
	I0831 22:35:43.631626   38206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0831 22:35:43.685129   38206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0831 22:35:43.739695   38206 main.go:141] libmachine: Stopping "ha-957517-m03"...
	I0831 22:35:43.739727   38206 main.go:141] libmachine: (ha-957517-m03) Calling .GetState
	I0831 22:35:43.741304   38206 main.go:141] libmachine: (ha-957517-m03) Calling .Stop
	I0831 22:35:43.744854   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 0/120
	I0831 22:35:44.747052   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 1/120
	I0831 22:35:45.748334   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 2/120
	I0831 22:35:46.749684   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 3/120
	I0831 22:35:47.751751   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 4/120
	I0831 22:35:48.753629   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 5/120
	I0831 22:35:49.755198   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 6/120
	I0831 22:35:50.757624   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 7/120
	I0831 22:35:51.758892   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 8/120
	I0831 22:35:52.760589   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 9/120
	I0831 22:35:53.762628   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 10/120
	I0831 22:35:54.764142   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 11/120
	I0831 22:35:55.765554   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 12/120
	I0831 22:35:56.767181   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 13/120
	I0831 22:35:57.768833   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 14/120
	I0831 22:35:58.770032   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 15/120
	I0831 22:35:59.771550   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 16/120
	I0831 22:36:00.772858   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 17/120
	I0831 22:36:01.774563   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 18/120
	I0831 22:36:02.775768   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 19/120
	I0831 22:36:03.777424   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 20/120
	I0831 22:36:04.778816   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 21/120
	I0831 22:36:05.780133   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 22/120
	I0831 22:36:06.781500   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 23/120
	I0831 22:36:07.783190   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 24/120
	I0831 22:36:08.784884   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 25/120
	I0831 22:36:09.786208   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 26/120
	I0831 22:36:10.787616   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 27/120
	I0831 22:36:11.790005   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 28/120
	I0831 22:36:12.791389   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 29/120
	I0831 22:36:13.792914   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 30/120
	I0831 22:36:14.794546   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 31/120
	I0831 22:36:15.796097   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 32/120
	I0831 22:36:16.797385   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 33/120
	I0831 22:36:17.799001   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 34/120
	I0831 22:36:18.800641   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 35/120
	I0831 22:36:19.802022   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 36/120
	I0831 22:36:20.803532   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 37/120
	I0831 22:36:21.804895   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 38/120
	I0831 22:36:22.806166   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 39/120
	I0831 22:36:23.807997   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 40/120
	I0831 22:36:24.809359   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 41/120
	I0831 22:36:25.810904   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 42/120
	I0831 22:36:26.812260   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 43/120
	I0831 22:36:27.813949   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 44/120
	I0831 22:36:28.815696   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 45/120
	I0831 22:36:29.818100   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 46/120
	I0831 22:36:30.819638   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 47/120
	I0831 22:36:31.822241   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 48/120
	I0831 22:36:32.823983   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 49/120
	I0831 22:36:33.825729   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 50/120
	I0831 22:36:34.827255   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 51/120
	I0831 22:36:35.828664   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 52/120
	I0831 22:36:36.830184   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 53/120
	I0831 22:36:37.831730   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 54/120
	I0831 22:36:38.833452   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 55/120
	I0831 22:36:39.834720   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 56/120
	I0831 22:36:40.836148   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 57/120
	I0831 22:36:41.837442   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 58/120
	I0831 22:36:42.838758   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 59/120
	I0831 22:36:43.840424   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 60/120
	I0831 22:36:44.841951   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 61/120
	I0831 22:36:45.843770   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 62/120
	I0831 22:36:46.845749   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 63/120
	I0831 22:36:47.847010   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 64/120
	I0831 22:36:48.849250   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 65/120
	I0831 22:36:49.850421   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 66/120
	I0831 22:36:50.851702   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 67/120
	I0831 22:36:51.852969   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 68/120
	I0831 22:36:52.854109   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 69/120
	I0831 22:36:53.855507   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 70/120
	I0831 22:36:54.856839   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 71/120
	I0831 22:36:55.858179   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 72/120
	I0831 22:36:56.859829   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 73/120
	I0831 22:36:57.861091   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 74/120
	I0831 22:36:58.862692   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 75/120
	I0831 22:36:59.864146   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 76/120
	I0831 22:37:00.865524   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 77/120
	I0831 22:37:01.867099   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 78/120
	I0831 22:37:02.868473   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 79/120
	I0831 22:37:03.870770   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 80/120
	I0831 22:37:04.872270   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 81/120
	I0831 22:37:05.873563   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 82/120
	I0831 22:37:06.875433   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 83/120
	I0831 22:37:07.876716   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 84/120
	I0831 22:37:08.878272   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 85/120
	I0831 22:37:09.879560   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 86/120
	I0831 22:37:10.880855   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 87/120
	I0831 22:37:11.882355   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 88/120
	I0831 22:37:12.883850   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 89/120
	I0831 22:37:13.885321   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 90/120
	I0831 22:37:14.886758   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 91/120
	I0831 22:37:15.887857   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 92/120
	I0831 22:37:16.889200   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 93/120
	I0831 22:37:17.890366   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 94/120
	I0831 22:37:18.891856   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 95/120
	I0831 22:37:19.893155   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 96/120
	I0831 22:37:20.894299   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 97/120
	I0831 22:37:21.895564   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 98/120
	I0831 22:37:22.897023   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 99/120
	I0831 22:37:23.898680   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 100/120
	I0831 22:37:24.900146   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 101/120
	I0831 22:37:25.901496   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 102/120
	I0831 22:37:26.902757   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 103/120
	I0831 22:37:27.904274   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 104/120
	I0831 22:37:28.906001   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 105/120
	I0831 22:37:29.907287   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 106/120
	I0831 22:37:30.908644   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 107/120
	I0831 22:37:31.909921   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 108/120
	I0831 22:37:32.911570   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 109/120
	I0831 22:37:33.913344   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 110/120
	I0831 22:37:34.914669   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 111/120
	I0831 22:37:35.916255   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 112/120
	I0831 22:37:36.918434   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 113/120
	I0831 22:37:37.919970   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 114/120
	I0831 22:37:38.922180   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 115/120
	I0831 22:37:39.923436   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 116/120
	I0831 22:37:40.924707   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 117/120
	I0831 22:37:41.925948   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 118/120
	I0831 22:37:42.927093   38206 main.go:141] libmachine: (ha-957517-m03) Waiting for machine to stop 119/120
	I0831 22:37:43.927886   38206 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0831 22:37:43.927926   38206 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0831 22:37:43.930147   38206 out.go:201] 
	W0831 22:37:43.931612   38206 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0831 22:37:43.931629   38206 out.go:270] * 
	* 
	W0831 22:37:43.933803   38206 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 22:37:43.935067   38206 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-957517 -v=7 --alsologtostderr" : exit status 82
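
The "Waiting for machine to stop n/120" lines and the final GUEST_STOP_TIMEOUT (exit status 82) above come from a bounded polling loop: the driver is asked to stop the VM, then the machine state is re-checked roughly once per second for 120 attempts before the stop is declared failed. The sketch below is only an illustration of that pattern, not minikube's actual code; the machine interface, stuckVM type, and function names are hypothetical stand-ins.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a hypothetical stand-in for a VM driver handle (not minikube's API).
type machine interface {
	Stop() error            // ask the hypervisor to stop the VM (may complete asynchronously)
	State() (string, error) // current state, e.g. "Running" or "Stopped"
}

// stopWithTimeout mirrors the pattern visible in the log: issue Stop, then poll
// the state about once per second for maxAttempts tries before giving up.
func stopWithTimeout(m machine, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		st, err := m.State()
		if err != nil {
			return err
		}
		if st != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	// This is the condition surfaced above as GUEST_STOP_TIMEOUT.
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM never leaves the Running state, reproducing the ha-957517-m03 behaviour.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	if err := stopWithTimeout(stuckVM{}, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}

Under this (assumed) scheme, a guest that ignores the shutdown request burns through all 120 attempts, which matches the roughly two minutes (2m1.9s) the stop command ran before exiting.
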
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-957517 --wait=true -v=7 --alsologtostderr
E0831 22:39:59.874602   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:41:22.942694   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:41:42.547415   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:44:59.875514   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:46:42.547106   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-957517 --wait=true -v=7 --alsologtostderr: exit status 80 (10m21.366444535s)

                                                
                                                
-- stdout --
	* [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	* Updating the running kvm2 "ha-957517" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-957517-m02" control-plane node in "ha-957517" cluster
	* Restarting existing kvm2 VM for "ha-957517-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.137
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.137
	* Verifying Kubernetes components...
	
	* Starting "ha-957517-m03" control-plane node in "ha-957517" cluster
	* Restarting existing kvm2 VM for "ha-957517-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.137,192.168.39.61
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.137
	  - env NO_PROXY=192.168.39.137,192.168.39.61
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:37:43.980883   38680 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:37:43.981003   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981012   38680 out.go:358] Setting ErrFile to fd 2...
	I0831 22:37:43.981017   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981185   38680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:37:43.981743   38680 out.go:352] Setting JSON to false
	I0831 22:37:43.982668   38680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4811,"bootTime":1725139053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:37:43.982721   38680 start.go:139] virtualization: kvm guest
	I0831 22:37:43.985184   38680 out.go:177] * [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:37:43.986509   38680 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:37:43.986515   38680 notify.go:220] Checking for updates...
	I0831 22:37:43.989086   38680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:37:43.990438   38680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:37:43.991747   38680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:37:43.992969   38680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:37:43.994015   38680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:37:43.995541   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:43.995622   38680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:37:43.995993   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:43.996068   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.011776   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0831 22:37:44.012162   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.012650   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.012667   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.012988   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.013198   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.046996   38680 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 22:37:44.048362   38680 start.go:297] selected driver: kvm2
	I0831 22:37:44.048377   38680 start.go:901] validating driver "kvm2" against &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.048522   38680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:37:44.048853   38680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.048953   38680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:37:44.063722   38680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:37:44.064393   38680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:37:44.064470   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:37:44.064486   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:37:44.064562   38680 start.go:340] cluster config:
	{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.064759   38680 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.066648   38680 out.go:177] * Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	I0831 22:37:44.067887   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:37:44.067918   38680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:37:44.067925   38680 cache.go:56] Caching tarball of preloaded images
	I0831 22:37:44.067991   38680 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:37:44.068000   38680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:37:44.068132   38680 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:37:44.068445   38680 start.go:360] acquireMachinesLock for ha-957517: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:37:44.068502   38680 start.go:364] duration metric: took 31.59µs to acquireMachinesLock for "ha-957517"
	I0831 22:37:44.068521   38680 start.go:96] Skipping create...Using existing machine configuration
	I0831 22:37:44.068531   38680 fix.go:54] fixHost starting: 
	I0831 22:37:44.068854   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:44.068905   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.082801   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0831 22:37:44.083254   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.083798   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.083819   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.084088   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.084260   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.084412   38680 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:37:44.086212   38680 fix.go:112] recreateIfNeeded on ha-957517: state=Running err=<nil>
	W0831 22:37:44.086242   38680 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 22:37:44.088190   38680 out.go:177] * Updating the running kvm2 "ha-957517" VM ...
	I0831 22:37:44.089385   38680 machine.go:93] provisionDockerMachine start ...
	I0831 22:37:44.089401   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.089624   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.092086   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092623   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.092649   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092785   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.092955   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093100   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093214   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.093355   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.093526   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.093536   38680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:37:44.200556   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.200584   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.200847   38680 buildroot.go:166] provisioning hostname "ha-957517"
	I0831 22:37:44.200870   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.201116   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.203857   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204273   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.204297   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204424   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.204626   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204766   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204881   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.205020   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.205217   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.205231   38680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517 && echo "ha-957517" | sudo tee /etc/hostname
	I0831 22:37:44.330466   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.330490   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.333462   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333829   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.333868   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333997   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.334236   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334427   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334627   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.334794   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.334953   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.334968   38680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:37:44.440566   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:37:44.440594   38680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:37:44.440626   38680 buildroot.go:174] setting up certificates
	I0831 22:37:44.440636   38680 provision.go:84] configureAuth start
	I0831 22:37:44.440648   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.440934   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:37:44.443531   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.443928   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.443954   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.444251   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.446892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447301   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.447348   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447478   38680 provision.go:143] copyHostCerts
	I0831 22:37:44.447502   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447538   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:37:44.447557   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447632   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:37:44.447757   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447782   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:37:44.447790   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447831   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:37:44.447904   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447927   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:37:44.447935   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447966   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:37:44.448033   38680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517 san=[127.0.0.1 192.168.39.137 ha-957517 localhost minikube]
	I0831 22:37:44.517123   38680 provision.go:177] copyRemoteCerts
	I0831 22:37:44.517176   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:37:44.517197   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.519747   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520161   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.520195   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520321   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.520494   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.520656   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.520777   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:37:44.602311   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:37:44.602376   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:37:44.631362   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:37:44.631445   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0831 22:37:44.663123   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:37:44.663190   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:37:44.691526   38680 provision.go:87] duration metric: took 250.877979ms to configureAuth
	I0831 22:37:44.691553   38680 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:37:44.691854   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:44.691944   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.694465   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.694868   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.694892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.695159   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.695350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695512   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695618   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.695764   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.695955   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.695971   38680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:39:15.634828   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:39:15.634858   38680 machine.go:96] duration metric: took 1m31.54546155s to provisionDockerMachine
	I0831 22:39:15.634870   38680 start.go:293] postStartSetup for "ha-957517" (driver="kvm2")
	I0831 22:39:15.634881   38680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:39:15.634896   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.635202   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:39:15.635227   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.638236   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638748   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.638776   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638909   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.639093   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.639293   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.639429   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.722855   38680 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:39:15.727014   38680 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:39:15.727034   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:39:15.727097   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:39:15.727199   38680 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:39:15.727212   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:39:15.727302   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:39:15.736663   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:15.760234   38680 start.go:296] duration metric: took 125.353074ms for postStartSetup
	I0831 22:39:15.760279   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.760559   38680 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0831 22:39:15.760588   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.763201   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763613   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.763633   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763770   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.763954   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.764091   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.764216   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	W0831 22:39:15.846286   38680 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0831 22:39:15.846323   38680 fix.go:56] duration metric: took 1m31.777792266s for fixHost
	I0831 22:39:15.846350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.848916   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849334   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.849365   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849543   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.849722   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.849879   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.850019   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.850187   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:39:15.850351   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:39:15.850361   38680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:39:15.951938   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143955.913478582
	
	I0831 22:39:15.951960   38680 fix.go:216] guest clock: 1725143955.913478582
	I0831 22:39:15.951967   38680 fix.go:229] Guest: 2024-08-31 22:39:15.913478582 +0000 UTC Remote: 2024-08-31 22:39:15.846332814 +0000 UTC m=+91.900956878 (delta=67.145768ms)
	I0831 22:39:15.951984   38680 fix.go:200] guest clock delta is within tolerance: 67.145768ms
	I0831 22:39:15.951989   38680 start.go:83] releasing machines lock for "ha-957517", held for 1m31.883475675s
	I0831 22:39:15.952012   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.952276   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:15.955057   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955473   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.955502   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955634   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956283   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956455   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956567   38680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:39:15.956617   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.956632   38680 ssh_runner.go:195] Run: cat /version.json
	I0831 22:39:15.956655   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.959097   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959114   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959529   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959554   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959578   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959597   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959696   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959871   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959900   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960042   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960055   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960225   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960234   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.960339   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:16.064767   38680 ssh_runner.go:195] Run: systemctl --version
	I0831 22:39:16.070863   38680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:39:16.230376   38680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:39:16.236729   38680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:39:16.236783   38680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:39:16.245939   38680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 22:39:16.245960   38680 start.go:495] detecting cgroup driver to use...
	I0831 22:39:16.246006   38680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:39:16.261896   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:39:16.276357   38680 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:39:16.276410   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:39:16.289922   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:39:16.302913   38680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:39:16.451294   38680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:39:16.596005   38680 docker.go:233] disabling docker service ...
	I0831 22:39:16.596062   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:39:16.612423   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:39:16.625984   38680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:39:16.769630   38680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:39:16.915592   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:39:16.929353   38680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:39:16.949875   38680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:39:16.949927   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.960342   38680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:39:16.960402   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.970745   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.980972   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.991090   38680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:39:17.001258   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.011096   38680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.021887   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.031682   38680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:39:17.040513   38680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:39:17.049301   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.194385   38680 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:39:17.428315   38680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:39:17.428408   38680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:39:17.433515   38680 start.go:563] Will wait 60s for crictl version
	I0831 22:39:17.433556   38680 ssh_runner.go:195] Run: which crictl
	I0831 22:39:17.437499   38680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:39:17.479960   38680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:39:17.480026   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.515314   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.547505   38680 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:39:17.548632   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:17.550955   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551269   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:17.551296   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551521   38680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:39:17.556237   38680 kubeadm.go:883] updating cluster {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:39:17.556363   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:39:17.556415   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.600319   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.600339   38680 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:39:17.600382   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.634386   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.634406   38680 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:39:17.634416   38680 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.0 crio true true} ...
	I0831 22:39:17.634526   38680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:39:17.634618   38680 ssh_runner.go:195] Run: crio config
	I0831 22:39:17.682178   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:39:17.682203   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:39:17.682220   38680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:39:17.682240   38680 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957517 NodeName:ha-957517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:39:17.682375   38680 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-957517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:39:17.682399   38680 kube-vip.go:115] generating kube-vip config ...
	I0831 22:39:17.682439   38680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:39:17.694650   38680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:39:17.694772   38680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0831 22:39:17.694843   38680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:39:17.705040   38680 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:39:17.705103   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 22:39:17.714471   38680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0831 22:39:17.733900   38680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:39:17.754099   38680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:39:17.773312   38680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:39:17.792847   38680 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:39:17.797963   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.955439   38680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:39:17.970324   38680 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.137
	I0831 22:39:17.970348   38680 certs.go:194] generating shared ca certs ...
	I0831 22:39:17.970363   38680 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:17.970501   38680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:39:17.970573   38680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:39:17.970590   38680 certs.go:256] generating profile certs ...
	I0831 22:39:17.970697   38680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:39:17.970732   38680 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56
	I0831 22:39:17.970747   38680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.26 192.168.39.254]
	I0831 22:39:18.083143   38680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 ...
	I0831 22:39:18.083186   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56: {Name:mk489dd79b841ee44fa8d66455c5fed8039b89dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083399   38680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 ...
	I0831 22:39:18.083417   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56: {Name:mkbcff44832282605e436763bcf5c32528ce79a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083523   38680 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:39:18.083680   38680 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
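
The apiserver certificate generated above has to carry every control-plane address plus the shared virtual IP 192.168.39.254 as IP SANs, otherwise clients reaching the API server through kube-vip would fail TLS verification. A quick way to confirm the SANs from Go, standard library only (the path is an assumption standing in for the apiserver.crt copied to the node further down):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local path; point it at the generated apiserver.crt.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // first PEM block holds the certificate
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses) // should include 192.168.39.254, the HA VIP
}
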
	I0831 22:39:18.083806   38680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:39:18.083821   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:39:18.083834   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:39:18.083847   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:39:18.083860   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:39:18.083873   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:39:18.083885   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:39:18.083901   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:39:18.083913   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:39:18.083956   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:39:18.083983   38680 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:39:18.083992   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:39:18.084015   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:39:18.084037   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:39:18.084058   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:39:18.084099   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:18.084124   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.084138   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.084150   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.084726   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:39:18.111136   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:39:18.134120   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:39:18.157775   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:39:18.181362   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 22:39:18.205148   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:39:18.229117   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:39:18.252441   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:39:18.276005   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:39:18.298954   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:39:18.321901   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:39:18.345593   38680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:39:18.363290   38680 ssh_runner.go:195] Run: openssl version
	I0831 22:39:18.369103   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:39:18.379738   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384052   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384104   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.389812   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:39:18.399006   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:39:18.409817   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414246   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414294   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.419998   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:39:18.429270   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:39:18.439988   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444351   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444394   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.450124   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
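
Each of the three certificate sections above follows the same pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can find the CA. A rough standalone Go equivalent of that pattern, shelling out to the same openssl invocation (the function name and paths are illustrative; minikube performs these steps over SSH via ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of a CA certificate and symlinks
// it as <hash>.0 under /etc/ssl/certs, mirroring the commands in the log.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Tolerate an existing link so reruns stay idempotent.
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		return err
	}
	fmt.Println("linked", link, "->", certPath)
	return nil
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
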
	I0831 22:39:18.459442   38680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:39:18.463809   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 22:39:18.469261   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 22:39:18.474818   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 22:39:18.480052   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 22:39:18.485805   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 22:39:18.490982   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
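
The "-checkend 86400" runs above only pass if each certificate remains valid for at least another 24 hours. The same condition is a direct comparison against NotAfter; a small illustrative sketch (the helper name and path are assumptions, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the condition that "openssl x509 -checkend <seconds>" tests for.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local path standing in for the files checked over SSH above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
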
	I0831 22:39:18.496430   38680 kubeadm.go:392] StartCluster: {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:39:18.496538   38680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:39:18.496594   38680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:39:18.540006   38680 cri.go:89] found id: "74033e6de6f78771aa278fb3bf2337b2694d3624100dd7e11f196f8efd688612"
	I0831 22:39:18.540034   38680 cri.go:89] found id: "829e2803166e8b4f563134db85ca290dee0f761c7f98598b5808a7653b837f29"
	I0831 22:39:18.540039   38680 cri.go:89] found id: "ce5a5113d787c6fa00a34027dbed5a4c4a2879f803312b2f06a9b73b7fabb497"
	I0831 22:39:18.540042   38680 cri.go:89] found id: "4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e"
	I0831 22:39:18.540044   38680 cri.go:89] found id: "0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6"
	I0831 22:39:18.540047   38680 cri.go:89] found id: "c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74"
	I0831 22:39:18.540050   38680 cri.go:89] found id: "35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23"
	I0831 22:39:18.540052   38680 cri.go:89] found id: "b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d"
	I0831 22:39:18.540055   38680 cri.go:89] found id: "883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe"
	I0831 22:39:18.540061   38680 cri.go:89] found id: "e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3"
	I0831 22:39:18.540073   38680 cri.go:89] found id: "f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18"
	I0831 22:39:18.540077   38680 cri.go:89] found id: "179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d"
	I0831 22:39:18.540081   38680 cri.go:89] found id: "f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899"
	I0831 22:39:18.540085   38680 cri.go:89] found id: ""
	I0831 22:39:18.540125   38680 ssh_runner.go:195] Run: sudo runc list -f json
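
The "found id" values above were produced by the crictl invocation at the top of this block, which lists every container, running or not, labelled with the kube-system namespace. A hypothetical local equivalent in Go, assuming crictl is on PATH and the caller already has the required privileges (minikube runs it remotely with sudo through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs printed by crictl,
// one per line, matching the "found id" entries in the log.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
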

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-957517 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-957517
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-957517 -n ha-957517
E0831 22:48:05.610596   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:245: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p ha-957517 logs -n 25: (1.83246745s)
helpers_test.go:253: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m04 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp testdata/cp-test.txt                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m04_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03:/home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m03 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-957517 node stop m02 -v=7                                                     | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-957517 node start m02 -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-957517 -v=7                                                           | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-957517 -v=7                                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-957517 --wait=true -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-957517                                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:37:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
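
Every I/W/E line that follows uses the klog-style header described above: a level letter, mmdd, hh:mm:ss.uuuuuu, a thread id, file:line, then the message. A small illustrative Go parser for that prefix (the regexp and field names are assumptions made for this sketch):

package main

import (
	"fmt"
	"regexp"
)

// header matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format.
var header = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0831 22:37:43.980883   38680 out.go:345] Setting OutFile to fd 1 ..."
	m := header.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("level:", m[1], "date:", m[2], "time:", m[3])
	fmt.Println("thread:", m[4], "source:", m[5])
	fmt.Println("msg:", m[6])
}
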
	I0831 22:37:43.980883   38680 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:37:43.981003   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981012   38680 out.go:358] Setting ErrFile to fd 2...
	I0831 22:37:43.981017   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981185   38680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:37:43.981743   38680 out.go:352] Setting JSON to false
	I0831 22:37:43.982668   38680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4811,"bootTime":1725139053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:37:43.982721   38680 start.go:139] virtualization: kvm guest
	I0831 22:37:43.985184   38680 out.go:177] * [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:37:43.986509   38680 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:37:43.986515   38680 notify.go:220] Checking for updates...
	I0831 22:37:43.989086   38680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:37:43.990438   38680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:37:43.991747   38680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:37:43.992969   38680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:37:43.994015   38680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:37:43.995541   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:43.995622   38680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:37:43.995993   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:43.996068   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.011776   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0831 22:37:44.012162   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.012650   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.012667   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.012988   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.013198   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.046996   38680 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 22:37:44.048362   38680 start.go:297] selected driver: kvm2
	I0831 22:37:44.048377   38680 start.go:901] validating driver "kvm2" against &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.048522   38680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:37:44.048853   38680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.048953   38680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:37:44.063722   38680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:37:44.064393   38680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:37:44.064470   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:37:44.064486   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:37:44.064562   38680 start.go:340] cluster config:
	{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.064759   38680 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.066648   38680 out.go:177] * Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	I0831 22:37:44.067887   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:37:44.067918   38680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:37:44.067925   38680 cache.go:56] Caching tarball of preloaded images
	I0831 22:37:44.067991   38680 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:37:44.068000   38680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:37:44.068132   38680 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:37:44.068445   38680 start.go:360] acquireMachinesLock for ha-957517: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:37:44.068502   38680 start.go:364] duration metric: took 31.59µs to acquireMachinesLock for "ha-957517"
	I0831 22:37:44.068521   38680 start.go:96] Skipping create...Using existing machine configuration
	I0831 22:37:44.068531   38680 fix.go:54] fixHost starting: 
	I0831 22:37:44.068854   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:44.068905   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.082801   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0831 22:37:44.083254   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.083798   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.083819   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.084088   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.084260   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.084412   38680 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:37:44.086212   38680 fix.go:112] recreateIfNeeded on ha-957517: state=Running err=<nil>
	W0831 22:37:44.086242   38680 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 22:37:44.088190   38680 out.go:177] * Updating the running kvm2 "ha-957517" VM ...
	I0831 22:37:44.089385   38680 machine.go:93] provisionDockerMachine start ...
	I0831 22:37:44.089401   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.089624   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.092086   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092623   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.092649   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092785   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.092955   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093100   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093214   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.093355   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.093526   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.093536   38680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:37:44.200556   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.200584   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.200847   38680 buildroot.go:166] provisioning hostname "ha-957517"
	I0831 22:37:44.200870   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.201116   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.203857   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204273   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.204297   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204424   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.204626   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204766   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204881   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.205020   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.205217   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.205231   38680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517 && echo "ha-957517" | sudo tee /etc/hostname
	I0831 22:37:44.330466   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.330490   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.333462   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333829   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.333868   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333997   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.334236   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334427   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334627   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.334794   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.334953   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.334968   38680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:37:44.440566   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:37:44.440594   38680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:37:44.440626   38680 buildroot.go:174] setting up certificates
	I0831 22:37:44.440636   38680 provision.go:84] configureAuth start
	I0831 22:37:44.440648   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.440934   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:37:44.443531   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.443928   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.443954   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.444251   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.446892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447301   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.447348   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447478   38680 provision.go:143] copyHostCerts
	I0831 22:37:44.447502   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447538   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:37:44.447557   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447632   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:37:44.447757   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447782   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:37:44.447790   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447831   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:37:44.447904   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447927   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:37:44.447935   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447966   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
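
copyHostCerts above applies the same refresh pattern to ca.pem, cert.pem and key.pem: if the destination already exists it is removed, then the source is copied in again. A self-contained Go sketch of that pattern (the refreshCopy name and paths are illustrative, not minikube's exec_runner API):

package main

import (
	"io"
	"os"
)

// refreshCopy removes dst if it already exists, then copies src to dst,
// mirroring the found / rm / cp sequence in the log above.
func refreshCopy(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths echoing the ca.pem copy above.
	home, _ := os.UserHomeDir()
	if err := refreshCopy(home+"/.minikube/certs/ca.pem", home+"/.minikube/ca.pem"); err != nil {
		panic(err)
	}
}
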
	I0831 22:37:44.448033   38680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517 san=[127.0.0.1 192.168.39.137 ha-957517 localhost minikube]
	I0831 22:37:44.517123   38680 provision.go:177] copyRemoteCerts
	I0831 22:37:44.517176   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:37:44.517197   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.519747   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520161   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.520195   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520321   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.520494   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.520656   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.520777   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:37:44.602311   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:37:44.602376   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:37:44.631362   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:37:44.631445   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0831 22:37:44.663123   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:37:44.663190   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:37:44.691526   38680 provision.go:87] duration metric: took 250.877979ms to configureAuth
	I0831 22:37:44.691553   38680 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:37:44.691854   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:44.691944   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.694465   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.694868   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.694892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.695159   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.695350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695512   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695618   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.695764   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.695955   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.695971   38680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:39:15.634828   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:39:15.634858   38680 machine.go:96] duration metric: took 1m31.54546155s to provisionDockerMachine
	I0831 22:39:15.634870   38680 start.go:293] postStartSetup for "ha-957517" (driver="kvm2")
	I0831 22:39:15.634881   38680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:39:15.634896   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.635202   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:39:15.635227   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.638236   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638748   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.638776   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638909   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.639093   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.639293   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.639429   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.722855   38680 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:39:15.727014   38680 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:39:15.727034   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:39:15.727097   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:39:15.727199   38680 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:39:15.727212   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:39:15.727302   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:39:15.736663   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:15.760234   38680 start.go:296] duration metric: took 125.353074ms for postStartSetup
	I0831 22:39:15.760279   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.760559   38680 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0831 22:39:15.760588   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.763201   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763613   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.763633   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763770   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.763954   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.764091   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.764216   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	W0831 22:39:15.846286   38680 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0831 22:39:15.846323   38680 fix.go:56] duration metric: took 1m31.777792266s for fixHost
	I0831 22:39:15.846350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.848916   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849334   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.849365   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849543   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.849722   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.849879   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.850019   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.850187   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:39:15.850351   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:39:15.850361   38680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:39:15.951938   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143955.913478582
	
	I0831 22:39:15.951960   38680 fix.go:216] guest clock: 1725143955.913478582
	I0831 22:39:15.951967   38680 fix.go:229] Guest: 2024-08-31 22:39:15.913478582 +0000 UTC Remote: 2024-08-31 22:39:15.846332814 +0000 UTC m=+91.900956878 (delta=67.145768ms)
	I0831 22:39:15.951984   38680 fix.go:200] guest clock delta is within tolerance: 67.145768ms
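For context, the skew check above runs "date +%s.%N" on the guest over SSH and compares it with the host wall clock; here the delta is about 67ms, inside tolerance, so no clock adjustment is attempted. The comparison can be reproduced by hand, as a sketch (the "ha-957517" SSH target below is a hypothetical alias for the VM, not something from the log):

    # compare guest and host clocks; a small delta is expected
    guest=$(ssh ha-957517 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$host - $guest" | bc) s"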
	I0831 22:39:15.951989   38680 start.go:83] releasing machines lock for "ha-957517", held for 1m31.883475675s
	I0831 22:39:15.952012   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.952276   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:15.955057   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955473   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.955502   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955634   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956283   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956455   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956567   38680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:39:15.956617   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.956632   38680 ssh_runner.go:195] Run: cat /version.json
	I0831 22:39:15.956655   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.959097   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959114   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959529   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959554   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959578   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959597   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959696   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959871   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959900   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960042   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960055   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960225   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960234   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.960339   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:16.064767   38680 ssh_runner.go:195] Run: systemctl --version
	I0831 22:39:16.070863   38680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:39:16.230376   38680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:39:16.236729   38680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:39:16.236783   38680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:39:16.245939   38680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
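The find command above would rename any bridge or podman CNI configs in /etc/cni/net.d to *.mk_disabled so they cannot conflict with the CNI minikube deploys (kindnet on this multinode profile); on this host nothing matched. A read-only variant of the same query, useful when debugging a profile where configs do get sidelined:

    # list bridge/podman CNI configs that would be disabled (prints nothing here)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) -print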
	I0831 22:39:16.245960   38680 start.go:495] detecting cgroup driver to use...
	I0831 22:39:16.246006   38680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:39:16.261896   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:39:16.276357   38680 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:39:16.276410   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:39:16.289922   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:39:16.302913   38680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:39:16.451294   38680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:39:16.596005   38680 docker.go:233] disabling docker service ...
	I0831 22:39:16.596062   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:39:16.612423   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:39:16.625984   38680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:39:16.769630   38680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:39:16.915592   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:39:16.929353   38680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:39:16.949875   38680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:39:16.949927   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.960342   38680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:39:16.960402   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.970745   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.980972   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.991090   38680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:39:17.001258   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.011096   38680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.021887   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.031682   38680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:39:17.040513   38680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:39:17.049301   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.194385   38680 ssh_runner.go:195] Run: sudo systemctl restart crio
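The sed series above edits /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls; IPv4 forwarding is switched on just before the daemon-reload and restart. The result can be verified with a quick grep (paths as in the log):

    # verify the CRI-O drop-in reflects the edits applied above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sysctl net.ipv4.ip_forward   # expected to report 1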
	I0831 22:39:17.428315   38680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:39:17.428408   38680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:39:17.433515   38680 start.go:563] Will wait 60s for crictl version
	I0831 22:39:17.433556   38680 ssh_runner.go:195] Run: which crictl
	I0831 22:39:17.437499   38680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:39:17.479960   38680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:39:17.480026   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.515314   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.547505   38680 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:39:17.548632   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:17.550955   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551269   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:17.551296   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551521   38680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:39:17.556237   38680 kubeadm.go:883] updating cluster {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:39:17.556363   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:39:17.556415   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.600319   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.600339   38680 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:39:17.600382   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.634386   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.634406   38680 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:39:17.634416   38680 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.0 crio true true} ...
	I0831 22:39:17.634526   38680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
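The snippet above is the kubelet systemd drop-in rendered for this node: Wants=crio.service plus an ExecStart override that pins --node-ip, --hostname-override and the bootstrap/kubeconfig paths; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down (the 309-byte scp). After the daemon-reload, the merged unit can be inspected with:

    # show the effective kubelet unit including the 10-kubeadm.conf drop-in
    systemctl cat kubelet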
	I0831 22:39:17.634618   38680 ssh_runner.go:195] Run: crio config
	I0831 22:39:17.682178   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:39:17.682203   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:39:17.682220   38680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:39:17.682240   38680 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957517 NodeName:ha-957517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:39:17.682375   38680 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-957517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
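The generated kubeadm config above is four documents in one file: InitConfiguration (node registration on 192.168.39.137:8443 over the CRI-O socket), ClusterConfiguration (controlPlaneEndpoint control-plane.minikube.internal:8443, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12), KubeletConfiguration (cgroupfs driver, disk-pressure eviction effectively disabled by the 0%/100% thresholds), and KubeProxyConfiguration. It is scp'd to /var/tmp/minikube/kubeadm.yaml.new further down; a sanity check like the following should work there, assuming the "kubeadm config validate" subcommand available in recent kubeadm releases:

    # validate the rendered config before it is consumed (path from the scp below)
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new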
	
	I0831 22:39:17.682399   38680 kube-vip.go:115] generating kube-vip config ...
	I0831 22:39:17.682439   38680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:39:17.694650   38680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:39:17.694772   38680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
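This kube-vip static pod is what keeps the HA control-plane VIP 192.168.39.254 reachable: it advertises the address on eth0 via ARP, uses the plndr-cp-lock lease for leader election, and load-balances control-plane traffic on port 8443; NET_ADMIN/NET_RAW and the mounted admin.conf give it the privileges and API access it needs. Two rough checks on whichever node currently leads (values taken from the manifest above, not from log output):

    # the VIP should be present on eth0 on the kube-vip leader
    ip addr show dev eth0 | grep 192.168.39.254
    # and the HA apiserver endpoint should answer health checks
    curl -k https://192.168.39.254:8443/healthz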
	I0831 22:39:17.694843   38680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:39:17.705040   38680 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:39:17.705103   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 22:39:17.714471   38680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0831 22:39:17.733900   38680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:39:17.754099   38680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:39:17.773312   38680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:39:17.792847   38680 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:39:17.797963   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.955439   38680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:39:17.970324   38680 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.137
	I0831 22:39:17.970348   38680 certs.go:194] generating shared ca certs ...
	I0831 22:39:17.970363   38680 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:17.970501   38680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:39:17.970573   38680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:39:17.970590   38680 certs.go:256] generating profile certs ...
	I0831 22:39:17.970697   38680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:39:17.970732   38680 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56
	I0831 22:39:17.970747   38680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.26 192.168.39.254]
	I0831 22:39:18.083143   38680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 ...
	I0831 22:39:18.083186   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56: {Name:mk489dd79b841ee44fa8d66455c5fed8039b89dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083399   38680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 ...
	I0831 22:39:18.083417   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56: {Name:mkbcff44832282605e436763bcf5c32528ce79a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083523   38680 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:39:18.083680   38680 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
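The regenerated apiserver certificate is signed for every address a client might use to reach this HA control plane: the in-cluster service IPs, loopback, the three control-plane node IPs (192.168.39.137, .61, .26) and the kube-vip VIP 192.168.39.254. Once the cert is copied to /var/lib/minikube/certs (the scp lines below), the SANs can be confirmed with:

    # list the Subject Alternative Names baked into the apiserver cert
    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'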
	I0831 22:39:18.083806   38680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:39:18.083821   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:39:18.083834   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:39:18.083847   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:39:18.083860   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:39:18.083873   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:39:18.083885   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:39:18.083901   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:39:18.083913   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:39:18.083956   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:39:18.083983   38680 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:39:18.083992   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:39:18.084015   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:39:18.084037   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:39:18.084058   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:39:18.084099   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:18.084124   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.084138   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.084150   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.084726   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:39:18.111136   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:39:18.134120   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:39:18.157775   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:39:18.181362   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 22:39:18.205148   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:39:18.229117   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:39:18.252441   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:39:18.276005   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:39:18.298954   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:39:18.321901   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:39:18.345593   38680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:39:18.363290   38680 ssh_runner.go:195] Run: openssl version
	I0831 22:39:18.369103   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:39:18.379738   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384052   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384104   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.389812   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:39:18.399006   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:39:18.409817   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414246   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414294   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.419998   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:39:18.429270   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:39:18.439988   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444351   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444394   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.450124   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
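Each CA dropped into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above); the hash is exactly what the preceding "openssl x509 -hash -noout" calls printed, and it is how OpenSSL's hashed-directory lookup finds a CA at verification time. For example:

    # the symlink name is the certificate's subject hash plus ".0"
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"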
	I0831 22:39:18.459442   38680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:39:18.463809   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 22:39:18.469261   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 22:39:18.474818   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 22:39:18.480052   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 22:39:18.485805   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 22:39:18.490982   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
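The -checkend 86400 calls above are 24-hour expiry checks: openssl exits 0 if the certificate is still valid 86400 seconds from now and non-zero otherwise, which is what allows this restart path to keep the existing etcd, apiserver-client and front-proxy certs instead of regenerating them. Usage, for reference:

    # exit status says whether the cert survives the next 24 hours
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "expires within 24h"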
	I0831 22:39:18.496430   38680 kubeadm.go:392] StartCluster: {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:39:18.496538   38680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:39:18.496594   38680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:39:18.540006   38680 cri.go:89] found id: "74033e6de6f78771aa278fb3bf2337b2694d3624100dd7e11f196f8efd688612"
	I0831 22:39:18.540034   38680 cri.go:89] found id: "829e2803166e8b4f563134db85ca290dee0f761c7f98598b5808a7653b837f29"
	I0831 22:39:18.540039   38680 cri.go:89] found id: "ce5a5113d787c6fa00a34027dbed5a4c4a2879f803312b2f06a9b73b7fabb497"
	I0831 22:39:18.540042   38680 cri.go:89] found id: "4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e"
	I0831 22:39:18.540044   38680 cri.go:89] found id: "0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6"
	I0831 22:39:18.540047   38680 cri.go:89] found id: "c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74"
	I0831 22:39:18.540050   38680 cri.go:89] found id: "35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23"
	I0831 22:39:18.540052   38680 cri.go:89] found id: "b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d"
	I0831 22:39:18.540055   38680 cri.go:89] found id: "883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe"
	I0831 22:39:18.540061   38680 cri.go:89] found id: "e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3"
	I0831 22:39:18.540073   38680 cri.go:89] found id: "f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18"
	I0831 22:39:18.540077   38680 cri.go:89] found id: "179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d"
	I0831 22:39:18.540081   38680 cri.go:89] found id: "f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899"
	I0831 22:39:18.540085   38680 cri.go:89] found id: ""
	I0831 22:39:18.540125   38680 ssh_runner.go:195] Run: sudo runc list -f json
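The container IDs listed above come from filtering CRI-O by the io.kubernetes.pod.namespace label (the cri.go listing recorded at the start of StartCluster); "sudo runc list -f json" then gives the raw OCI-runtime view of the same containers. The query can be repeated by hand on the guest:

    # kube-system container IDs known to CRI-O, then the raw runc view (truncated)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json | head -c 300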
	
	
	==> CRI-O <==
	Aug 31 22:48:05 ha-957517 crio[3554]: time="2024-08-31 22:48:05.983507560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144485983482105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0dfdbb3-9359-4271-8062-681447fb68e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:05 ha-957517 crio[3554]: time="2024-08-31 22:48:05.984433773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c931b69-da49-458c-a824-4e6d444699b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:05 ha-957517 crio[3554]: time="2024-08-31 22:48:05.984490465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c931b69-da49-458c-a824-4e6d444699b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:05 ha-957517 crio[3554]: time="2024-08-31 22:48:05.985189929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c931b69-da49-458c-a824-4e6d444699b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.030649815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92b5cb9f-7444-4d48-8b01-0aec93042b68 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.030718629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92b5cb9f-7444-4d48-8b01-0aec93042b68 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.031881222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64381b6c-7f7c-4218-9aa6-05a2e6add8c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.032348579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144486032292357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64381b6c-7f7c-4218-9aa6-05a2e6add8c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.033022695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da319bf5-41f3-48f2-becd-73551a72562d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.033078389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da319bf5-41f3-48f2-becd-73551a72562d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.033643118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da319bf5-41f3-48f2-becd-73551a72562d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.076941759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f75d821-40a4-4f7f-9371-c183011e9608 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.077017045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f75d821-40a4-4f7f-9371-c183011e9608 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.078193289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8385617a-5efa-4646-8727-efa0e5600246 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.079151055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144486079068310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8385617a-5efa-4646-8727-efa0e5600246 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.080012806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdb21c94-29b4-4898-b2f0-7cade60c9f55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.080069354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdb21c94-29b4-4898-b2f0-7cade60c9f55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.080672547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdb21c94-29b4-4898-b2f0-7cade60c9f55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.122639074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1e61b49-a41d-4bee-93d8-d92a61fbd7a5 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.122718899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1e61b49-a41d-4bee-93d8-d92a61fbd7a5 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.123786276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e02c395b-e995-4280-b530-03e624ef85f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.124286829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144486124260573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e02c395b-e995-4280-b530-03e624ef85f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.124997536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d1a6b52-791a-470f-b1a2-235aaf8af8dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.125052584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d1a6b52-791a-470f-b1a2-235aaf8af8dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:06 ha-957517 crio[3554]: time="2024-08-31 22:48:06.125530543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d1a6b52-791a-470f-b1a2-235aaf8af8dc name=/runtime.v1.RuntimeService/ListContainers
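The ListContainers entries above are crio's own debug log of CRI calls being served on the ha-957517 node. A minimal sketch for reproducing the same container inventory interactively, assuming the ha-957517 profile is still running and crictl is present on the node (both are standard for these KVM/crio runs):

	# Open a shell command against the minikube node and list all CRI containers,
	# including the exited attempt-0 copies shown in the responses above.
	minikube ssh -p ha-957517 -- sudo crictl ps -a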
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	abf6707edc54b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       4                   0b38c1d912e18       storage-provisioner
	c20207ed446d6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   2                   a9b03c09aefd7       kube-controller-manager-ha-957517
	c9a7461b1cbf9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Running             kube-apiserver            3                   3c9af8992e786       kube-apiserver-ha-957517
	b4035603e0bca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       3                   0b38c1d912e18       storage-provisioner
	b2858979b6470       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      8 minutes ago       Running             busybox                   1                   3b0c514f045e8       busybox-7dff88458-zdnwd
	f52e663cfc090       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      8 minutes ago       Running             kube-vip                  0                   f30821f6fbe0a       kube-vip-ha-957517
	7c0a265fbf500       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago       Running             kube-proxy                1                   52b89349255db       kube-proxy-xrp64
	e7ce840d2d77d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   1                   79c31968d44d2       coredns-6f6b679f8f-k7rsc
	bc02ceedf7190       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago       Running             kindnet-cni               1                   69681cb02b753       kindnet-tkvsc
	94b314de8f0f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   1                   92ce9fbbebea8       coredns-6f6b679f8f-pc7gn
	06800a2b4052c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago       Running             kube-scheduler            1                   10c87af2fbc6e       kube-scheduler-ha-957517
	5a9df191ac669       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago       Running             etcd                      1                   bb4ef0b4cc881       etcd-ha-957517
	97642a4900a4f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago       Exited              kube-controller-manager   1                   a9b03c09aefd7       kube-controller-manager-ha-957517
	c57f67ede8a00       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Exited              kube-apiserver            2                   3c9af8992e786       kube-apiserver-ha-957517
	dc9ea3c2c4cc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   16 minutes ago      Exited              busybox                   0                   9f283cd54a11f       busybox-7dff88458-zdnwd
	4a85b32a796fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Exited              coredns                   0                   6e863e5cd9b9c       coredns-6f6b679f8f-k7rsc
	0cfba67fe9abb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Exited              coredns                   0                   298283fc5c9c2       coredns-6f6b679f8f-pc7gn
	35cc0bc2b6243       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    19 minutes ago      Exited              kindnet-cni               0                   37828bdcd38b5       kindnet-tkvsc
	b1a123f41fac1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Exited              kube-proxy                0                   99877abcdf5a7       kube-proxy-xrp64
	e1c6a4e36ddb2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Exited              kube-scheduler            0                   144e67a21ecaa       kube-scheduler-ha-957517
	f3ae732e5626c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Exited              etcd                      0                   960ae9b08a3ee       etcd-ha-957517
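In the table above, each control-plane and CNI container has an exited attempt-0 copy next to a running attempt-1 (or later) copy, consistent with the runtime restart recorded in the node events further down. A minimal cross-check from the pod side, assuming the kubectl context carries the profile name as minikube configures by default:

	# Restart counts and node placement for the kube-system pods listed above.
	kubectl --context ha-957517 -n kube-system get pods -o wide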
	
	
	==> coredns [0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6] <==
	[INFO] 10.244.0.4:36544 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002043312s
	[INFO] 10.244.1.2:34999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003609s
	[INFO] 10.244.1.2:45741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017294944s
	[INFO] 10.244.1.2:57093 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224681s
	[INFO] 10.244.2.2:49538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000358252s
	[INFO] 10.244.2.2:53732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00185161s
	[INFO] 10.244.2.2:41165 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231402s
	[INFO] 10.244.2.2:60230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118116s
	[INFO] 10.244.2.2:42062 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271609s
	[INFO] 10.244.0.4:49034 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067938s
	[INFO] 10.244.0.4:36002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196492s
	[INFO] 10.244.1.2:54186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124969s
	[INFO] 10.244.1.2:47709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000506218s
	[INFO] 10.244.0.4:54205 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087475s
	[INFO] 10.244.0.4:48802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055159s
	[INFO] 10.244.1.2:46825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148852s
	[INFO] 10.244.2.2:60523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183145s
	[INFO] 10.244.0.4:53842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116944s
	[INFO] 10.244.0.4:56291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000217808s
	[INFO] 10.244.0.4:53612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00028657s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	
	
	==> coredns [4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e] <==
	[INFO] 10.244.0.4:43334 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001723638s
	[INFO] 10.244.0.4:54010 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080627s
	[INFO] 10.244.0.4:47700 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424459s
	[INFO] 10.244.0.4:50346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070487s
	[INFO] 10.244.0.4:43522 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051146s
	[INFO] 10.244.1.2:60157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099584s
	[INFO] 10.244.1.2:48809 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104515s
	[INFO] 10.244.2.2:37042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132626s
	[INFO] 10.244.2.2:38343 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117546s
	[INFO] 10.244.2.2:53716 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092804s
	[INFO] 10.244.2.2:59881 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068808s
	[INFO] 10.244.0.4:40431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093051s
	[INFO] 10.244.0.4:39552 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087951s
	[INFO] 10.244.1.2:59301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113713s
	[INFO] 10.244.1.2:40299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210744s
	[INFO] 10.244.1.2:54276 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210063s
	[INFO] 10.244.2.2:34222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307653s
	[INFO] 10.244.2.2:42028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089936s
	[INFO] 10.244.2.2:47927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066426s
	[INFO] 10.244.0.4:39601 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085891s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2124518183]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:39:34.272) (total time: 10001ms):
	Trace[2124518183]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:39:44.273)
	Trace[2124518183]: [10.001560604s] [10.001560604s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40346->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[175734570]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:39:36.470) (total time: 13095ms):
	Trace[175734570]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40346->10.96.0.1:443: read: connection reset by peer 13095ms (22:39:49.565)
	Trace[175734570]: [13.095388766s] [13.095388766s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40346->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49714->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49714->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
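The repeated "connect: no route to host" and "connect: connection refused" errors against https://10.96.0.1:443 in the two restarted coredns containers indicate they could not reach the in-cluster apiserver Service while the control plane was coming back. A minimal sketch for confirming that Service and its backing endpoints once the apiserver responds again, assuming the same kubectl context:

	# 10.96.0.1 is the ClusterIP of the default/kubernetes Service; its endpoints
	# should list the reachable control-plane addresses for this cluster.
	kubectl --context ha-957517 get svc kubernetes
	kubectl --context ha-957517 get endpoints kubernetes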
	
	
	==> describe nodes <==
	Name:               ha-957517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_28_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:47:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-957517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 438078db78ee43a0bfe8057c915827a8
	  System UUID:                438078db-78ee-43a0-bfe8-057c915827a8
	  Boot ID:                    e88a2dfb-1351-416c-9b78-5a255e623f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zdnwd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-6f6b679f8f-k7rsc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-6f6b679f8f-pc7gn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-957517                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-tkvsc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-957517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-957517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xrp64                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-957517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-957517                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 19m                   kube-proxy       
	  Normal   Starting                 7m56s                 kube-proxy       
	  Normal   NodeAllocatableEnforced  19m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 19m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19m (x2 over 19m)     kubelet          Node ha-957517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x2 over 19m)     kubelet          Node ha-957517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x2 over 19m)     kubelet          Node ha-957517 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                   node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal   NodeReady                19m (x2 over 19m)     kubelet          Node ha-957517 status is now: NodeReady
	  Normal   RegisteredNode           18m                   node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal   RegisteredNode           17m                   node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Warning  ContainerGCFailed        9m45s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             9m6s (x2 over 9m31s)  kubelet          Node ha-957517 status is now: NodeNotReady
	  Normal   RegisteredNode           8m3s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal   RegisteredNode           7m56s                 node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	
	
	Name:               ha-957517-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_29_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:29:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:47:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-957517-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a152f180715c42228f54c353a9e8c1bb
	  System UUID:                a152f180-715c-4222-8f54-c353a9e8c1bb
	  Boot ID:                    53816b5a-a520-4752-84ea-97dfd1bb1a77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cwtrb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-957517-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-bmxh2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-957517-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-957517-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-dvpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-957517-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-957517-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m53s                  kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-957517-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-957517-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-957517-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                    node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  NodeNotReady             15m                    node-controller  Node ha-957517-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m24s (x8 over 8m25s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s (x8 over 8m25s)  kubelet          Node ha-957517-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m24s (x7 over 8m25s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m3s                   node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           7m56s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	
	
	Name:               ha-957517-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_30_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:30:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:48:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:46:30 +0000   Sat, 31 Aug 2024 22:40:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:46:30 +0000   Sat, 31 Aug 2024 22:40:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:46:30 +0000   Sat, 31 Aug 2024 22:40:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:46:30 +0000   Sat, 31 Aug 2024 22:40:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-957517-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 886d8b963cd94078ae7cf268a2d07053
	  System UUID:                886d8b96-3cd9-4078-ae7c-f268a2d07053
	  Boot ID:                    b3acc8ba-1831-4f6e-9674-7a390ea5a921
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkvvp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-957517-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-jqhdm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-957517-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-957517-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5c5hn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-957517-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-957517-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m56s                  kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-957517-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-957517-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-957517-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                    node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal   RegisteredNode           8m3s                   node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal   RegisteredNode           7m56s                  node-controller  Node ha-957517-m03 event: Registered Node ha-957517-m03 in Controller
	  Normal   NodeNotReady             7m23s                  node-controller  Node ha-957517-m03 status is now: NodeNotReady
	  Normal   NodeAllocatableEnforced  7m11s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 7m11s (x3 over 7m11s)  kubelet          Node ha-957517-m03 has been rebooted, boot id: b3acc8ba-1831-4f6e-9674-7a390ea5a921
	  Normal   NodeHasSufficientMemory  7m11s (x4 over 7m11s)  kubelet          Node ha-957517-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m11s (x4 over 7m11s)  kubelet          Node ha-957517-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m11s (x4 over 7m11s)  kubelet          Node ha-957517-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             7m11s                  kubelet          Node ha-957517-m03 status is now: NodeNotReady
	  Normal   NodeReady                7m11s (x2 over 7m11s)  kubelet          Node ha-957517-m03 status is now: NodeReady
	  Normal   Starting                 7m11s                  kubelet          Starting kubelet.
	
	
	Name:               ha-957517-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_31_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:31:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:35:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-957517-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08b180ad339e4d19acb3ea0e7328dc00
	  System UUID:                08b180ad-339e-4d19-acb3-ea0e7328dc00
	  Boot ID:                    eb027e2a-5c22-4721-9b4b-8b9696ccec09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2t9r8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-6f6xd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  RegisteredNode           16m                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-957517-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-957517-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m3s               node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  RegisteredNode           7m56s              node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeNotReady             7m23s              node-controller  Node ha-957517-m04 status is now: NodeNotReady
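	
	For reference, the four per-node dumps above are standard "kubectl describe node" output captured by the post-mortem logs. A minimal reproduction sketch, assuming the kubeconfig context created by this run is named after the profile (ha-957517):
	
	  kubectl --context ha-957517 describe nodes ha-957517 ha-957517-m02 ha-957517-m03 ha-957517-m04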
	
	
	==> dmesg <==
	[Aug31 22:28] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.064763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057170] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193531] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.118523] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.278233] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.003192] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.620544] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058441] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.958169] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083987] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.815006] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.616164] kauditd_printk_skb: 38 callbacks suppressed
	[Aug31 22:29] kauditd_printk_skb: 24 callbacks suppressed
	[Aug31 22:39] systemd-fstab-generator[3479]: Ignoring "noauto" option for root device
	[  +0.149859] systemd-fstab-generator[3491]: Ignoring "noauto" option for root device
	[  +0.177091] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +0.139553] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +0.274919] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.761462] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +3.640979] kauditd_printk_skb: 122 callbacks suppressed
	[ +14.497757] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.070411] kauditd_printk_skb: 1 callbacks suppressed
	[Aug31 22:40] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.831333] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321] <==
	{"level":"warn","ts":"2024-08-31T22:40:50.831025Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"3a30a86b86970552","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:40:50.833058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"3a30a86b86970552","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:40:50.905968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"3a30a86b86970552","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:40:50.919280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5527995f6263874a","from":"5527995f6263874a","remote-peer-id":"3a30a86b86970552","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-31T22:40:51.795169Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.26:2380/version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:51.795233Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:55.176718Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a30a86b86970552","rtt":"0s","error":"dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:55.176784Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a30a86b86970552","rtt":"0s","error":"dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:55.797237Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.26:2380/version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:55.797361Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:59.799096Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.26:2380/version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:40:59.799207Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:41:00.177510Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a30a86b86970552","rtt":"0s","error":"dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:41:00.177544Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a30a86b86970552","rtt":"0s","error":"dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:41:03.800945Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.26:2380/version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:41:03.801013Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a30a86b86970552","error":"Get \"https://192.168.39.26:2380/version\": dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:41:05.178718Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a30a86b86970552","rtt":"0s","error":"dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-31T22:41:05.178848Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a30a86b86970552","rtt":"0s","error":"dial tcp 192.168.39.26:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-31T22:41:05.334496Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.334841Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.353728Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.373785Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5527995f6263874a","to":"3a30a86b86970552","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-31T22:41:05.373849Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.383312Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5527995f6263874a","to":"3a30a86b86970552","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-31T22:41:05.383462Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	
	
	==> etcd [f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18] <==
	2024/08/31 22:37:44 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/31 22:37:44 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-31T22:37:44.867346Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T22:37:44.867485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-31T22:37:44.867596Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"5527995f6263874a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-31T22:37:44.867783Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.867840Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.867900Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868053Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868113Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868180Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868211Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868234Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868325Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868480Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868535Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868564Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868593Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.871798Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"warn","ts":"2024-08-31T22:37:44.871868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.676535677s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-31T22:37:44.871944Z","caller":"traceutil/trace.go:171","msg":"trace[1670798017] range","detail":"{range_begin:; range_end:; }","duration":"8.676624743s","start":"2024-08-31T22:37:36.195311Z","end":"2024-08-31T22:37:44.871936Z","steps":["trace[1670798017] 'agreement among raft nodes before linearized reading'  (duration: 8.676534851s)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:37:44.871994Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-08-31T22:37:44.872025Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-957517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	{"level":"error","ts":"2024-08-31T22:37:44.872015Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 22:48:06 up 20 min,  0 users,  load average: 0.00, 0.23, 0.30
	Linux ha-957517 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23] <==
	I0831 22:37:11.965462       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:21.965624       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:21.965663       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:37:21.965778       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:21.965799       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:21.965887       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:21.965908       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:21.965961       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:21.965983       1 main.go:299] handling current node
	I0831 22:37:31.963598       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:31.963696       1 main.go:299] handling current node
	I0831 22:37:31.963726       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:31.963744       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:37:31.963981       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:31.964015       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:31.964101       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:31.964130       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:41.972192       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:41.972307       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:41.972549       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:41.972573       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:41.972674       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:41.972700       1 main.go:299] handling current node
	I0831 22:37:41.972724       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:41.972729       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463] <==
	I0831 22:47:35.768580       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:47:45.771303       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:47:45.771623       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:47:45.771909       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:47:45.772023       1 main.go:299] handling current node
	I0831 22:47:45.772086       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:47:45.772115       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:47:45.772225       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:47:45.772246       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:47:55.767759       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:47:55.767870       1 main.go:299] handling current node
	I0831 22:47:55.767898       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:47:55.767916       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:47:55.768053       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:47:55.768073       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:47:55.768150       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:47:55.768184       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:48:05.761732       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:48:05.761788       1 main.go:299] handling current node
	I0831 22:48:05.761807       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:48:05.761815       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:48:05.762046       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:48:05.762056       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:48:05.762175       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:48:05.762185       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66] <==
	I0831 22:39:24.893970       1 options.go:228] external host was not specified, using 192.168.39.137
	I0831 22:39:24.900043       1 server.go:142] Version: v1.31.0
	I0831 22:39:24.900146       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:39:25.753461       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0831 22:39:25.780467       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 22:39:25.782476       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0831 22:39:25.782509       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0831 22:39:25.782760       1 instance.go:232] Using reconciler: lease
	W0831 22:39:45.750450       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0831 22:39:45.750634       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0831 22:39:45.784017       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7] <==
	I0831 22:40:05.374239       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0831 22:40:05.472696       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 22:40:05.472791       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 22:40:05.472859       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 22:40:05.473055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 22:40:05.477475       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 22:40:05.478969       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0831 22:40:05.484658       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.26 192.168.39.61]
	I0831 22:40:05.487470       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 22:40:05.493626       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 22:40:05.493664       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 22:40:05.493723       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 22:40:05.493771       1 aggregator.go:171] initial CRD sync complete...
	I0831 22:40:05.493807       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 22:40:05.493831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 22:40:05.493854       1 cache.go:39] Caches are synced for autoregister controller
	I0831 22:40:05.511569       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 22:40:05.511608       1 policy_source.go:224] refreshing policies
	I0831 22:40:05.563317       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 22:40:05.586662       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 22:40:05.594886       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0831 22:40:05.601068       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0831 22:40:06.390155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0831 22:40:06.716212       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137 192.168.39.26 192.168.39.61]
	W0831 22:40:16.715522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137 192.168.39.61]
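	
	The two "Resetting endpoints for master service" lines above track which control-plane IPs are currently registered behind the kubernetes Service as members come and go. A quick way to inspect the same information directly (context name assumed as above):
	
	  kubectl --context ha-957517 -n default get endpoints kubernetes -o yaml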
	
	
	==> kube-controller-manager [97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef] <==
	I0831 22:39:25.523610       1 serving.go:386] Generated self-signed cert in-memory
	I0831 22:39:26.109857       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 22:39:26.110071       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:39:26.113096       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 22:39:26.113292       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 22:39:26.113913       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 22:39:26.114016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 22:39:46.790111       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.137:8443/healthz\": dial tcp 192.168.39.137:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7] <==
	I0831 22:40:32.592176       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-ls74x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-ls74x\": the object has been modified; please apply your changes to the latest version and try again"
	I0831 22:40:32.592895       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"dde41293-c35f-4bff-ba84-243ea97afdd0", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-ls74x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-ls74x": the object has been modified; please apply your changes to the latest version and try again
	I0831 22:40:32.646117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="68.834325ms"
	I0831 22:40:32.646474       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="179.62µs"
	I0831 22:40:43.934134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:40:43.934597       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:43.962824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:40:43.966216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:44.007270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.260655ms"
	I0831 22:40:44.007631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.483µs"
	I0831 22:40:45.396685       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:49.231316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:50.698795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:40:55.373161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:55.390183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:55.480686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:40:55.587243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:56.212165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.842µs"
	I0831 22:40:59.313150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:41:16.680676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.88817ms"
	I0831 22:41:16.681970       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.907µs"
	I0831 22:41:25.473132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:45:21.065296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517"
	I0831 22:45:56.005326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:46:30.597281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	
	
	==> kube-proxy [7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:39:26.911344       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:29.982883       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:33.055492       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:39.201473       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:51.487041       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0831 22:40:10.488958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0831 22:40:10.489188       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:40:10.537261       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:40:10.537354       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:40:10.537501       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:40:10.542452       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:40:10.542992       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:40:10.543052       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:40:10.545123       1 config.go:197] "Starting service config controller"
	I0831 22:40:10.545206       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:40:10.545249       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:40:10.545277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:40:10.546250       1 config.go:326] "Starting node config controller"
	I0831 22:40:10.546438       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:40:10.646348       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:40:10.646448       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:40:10.646525       1 shared_informer.go:320] Caches are synced for node config
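	
	This kube-proxy restart spends its first attempts failing to reach control-plane.minikube.internal:8443 (which the errors resolve to 192.168.39.254, presumably the kube-vip-managed VIP) with "no route to host", before the node lookup finally succeeds at 22:40:10. A hedged way to check whether that VIP is answering from the host running the tests, assuming the host has a route to the 192.168.39.0/24 libvirt network:
	
	  curl -k 'https://192.168.39.254:8443/readyz?verbose'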
	
	
	==> kube-proxy [b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d] <==
	E0831 22:36:21.374054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:21.374474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:21.374644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:29.181771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0831 22:36:29.181910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:29.181978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:29.182082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:29.182138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0831 22:36:29.183726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.022821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.022959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.023030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.023072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.023128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.023169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:56.381866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:56.382427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:56.382547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:56.382584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:05.598350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:05.598532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:33.246415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:33.246482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:39.390329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:39.390566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb] <==
	W0831 22:39:55.703786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.137:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:55.703902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.137:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:39:55.733952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.137:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:55.734020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.137:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:39:56.221045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.137:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:56.221132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.137:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:39:56.251131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.137:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:56.251271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.137:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:02.013299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.137:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:40:02.013444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.137:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:02.529215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.137:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:40:02.529971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.137:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:02.864927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.137:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:40:02.864971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.137:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:05.437244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:40:05.437838       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:40:05.437671       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:40:05.438221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:40:05.437720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:40:05.438340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:40:05.437777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:40:05.438466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:40:05.438193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:40:05.438537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:40:23.801533       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3] <==
	E0831 22:31:40.726228       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-2t9r8"
	I0831 22:31:40.726253       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.731781       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:31:40.731866       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3457f0a0-fd3b-4e40-819f-9d57c29036e6(kube-system/kindnet-mljxh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mljxh"
	E0831 22:31:40.731884       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-mljxh"
	I0831 22:31:40.731900       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:37:32.346967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0831 22:37:32.516236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0831 22:37:35.083345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0831 22:37:35.963186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0831 22:37:36.540206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0831 22:37:36.648180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0831 22:37:37.049684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:38.229658       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0831 22:37:38.502251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0831 22:37:38.836806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:39.474109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:39.769349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0831 22:37:41.935022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0831 22:37:42.328066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:43.701831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	I0831 22:37:44.800445       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0831 22:37:44.800584       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0831 22:37:44.800769       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0831 22:37:44.803420       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 31 22:46:31 ha-957517 kubelet[1303]: E0831 22:46:31.690783    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144391689892263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:31 ha-957517 kubelet[1303]: E0831 22:46:31.691663    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144391689892263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:41 ha-957517 kubelet[1303]: E0831 22:46:41.701159    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144401693327224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:41 ha-957517 kubelet[1303]: E0831 22:46:41.701537    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144401693327224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:51 ha-957517 kubelet[1303]: E0831 22:46:51.703551    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144411703058049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:51 ha-957517 kubelet[1303]: E0831 22:46:51.703588    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144411703058049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:01 ha-957517 kubelet[1303]: E0831 22:47:01.706417    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144421705733229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:01 ha-957517 kubelet[1303]: E0831 22:47:01.706462    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144421705733229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:11 ha-957517 kubelet[1303]: E0831 22:47:11.709511    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144431708085741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:11 ha-957517 kubelet[1303]: E0831 22:47:11.710041    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144431708085741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:21 ha-957517 kubelet[1303]: E0831 22:47:21.414553    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 22:47:21 ha-957517 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 22:47:21 ha-957517 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 22:47:21 ha-957517 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:47:21 ha-957517 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:47:21 ha-957517 kubelet[1303]: E0831 22:47:21.714614    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144441714228907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:21 ha-957517 kubelet[1303]: E0831 22:47:21.714638    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144441714228907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:31 ha-957517 kubelet[1303]: E0831 22:47:31.716606    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144451715648191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:31 ha-957517 kubelet[1303]: E0831 22:47:31.716666    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144451715648191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:41 ha-957517 kubelet[1303]: E0831 22:47:41.724156    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144461719202894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:41 ha-957517 kubelet[1303]: E0831 22:47:41.724259    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144461719202894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:51 ha-957517 kubelet[1303]: E0831 22:47:51.725648    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144471725319807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:51 ha-957517 kubelet[1303]: E0831 22:47:51.725988    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144471725319807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:01 ha-957517 kubelet[1303]: E0831 22:48:01.728129    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144481727545278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:01 ha-957517 kubelet[1303]: E0831 22:48:01.729741    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144481727545278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 22:48:05.692161   41102 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18943-13149/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-957517 -n ha-957517
helpers_test.go:262: (dbg) Run:  kubectl --context ha-957517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: kube-apiserver-ha-957517-m03
helpers_test.go:275: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context ha-957517 describe pod kube-apiserver-ha-957517-m03
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context ha-957517 describe pod kube-apiserver-ha-957517-m03: exit status 1 (97.622935ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-apiserver-ha-957517-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:280: kubectl --context ha-957517 describe pod kube-apiserver-ha-957517-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (745.95s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-957517 node delete m03 -v=7 --alsologtostderr: (5.623935791s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 7 (459.858592ms)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:48:13.604554   41359 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:48:13.604659   41359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:48:13.604668   41359 out.go:358] Setting ErrFile to fd 2...
	I0831 22:48:13.604672   41359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:48:13.604890   41359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:48:13.605051   41359 out.go:352] Setting JSON to false
	I0831 22:48:13.605080   41359 mustload.go:65] Loading cluster: ha-957517
	I0831 22:48:13.605129   41359 notify.go:220] Checking for updates...
	I0831 22:48:13.605612   41359 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:48:13.605633   41359 status.go:255] checking status of ha-957517 ...
	I0831 22:48:13.606047   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:13.606092   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:13.624542   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
	I0831 22:48:13.624910   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:13.625443   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:13.625482   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:13.625845   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:13.626031   41359 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:48:13.627777   41359 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:48:13.627794   41359 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:48:13.628058   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:13.628118   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:13.643437   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0831 22:48:13.643843   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:13.644276   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:13.644323   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:13.644604   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:13.644789   41359 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:48:13.647687   41359 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:48:13.648126   41359 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:48:13.648160   41359 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:48:13.648305   41359 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:48:13.648608   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:13.648654   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:13.664843   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0831 22:48:13.665272   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:13.665731   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:13.665753   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:13.666036   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:13.666236   41359 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:48:13.666423   41359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:48:13.666459   41359 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:48:13.669032   41359 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:48:13.669465   41359 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:48:13.669490   41359 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:48:13.669611   41359 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:48:13.669782   41359 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:48:13.669916   41359 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:48:13.670063   41359 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:48:13.746776   41359 ssh_runner.go:195] Run: systemctl --version
	I0831 22:48:13.753210   41359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:48:13.768562   41359 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:48:13.768595   41359 api_server.go:166] Checking apiserver status ...
	I0831 22:48:13.768633   41359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:48:13.783104   41359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4838/cgroup
	W0831 22:48:13.793204   41359 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4838/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:48:13.793273   41359 ssh_runner.go:195] Run: ls
	I0831 22:48:13.798048   41359 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:48:13.802303   41359 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:48:13.802325   41359 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:48:13.802334   41359 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:48:13.802350   41359 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:48:13.802632   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:13.802678   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:13.817306   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0831 22:48:13.817704   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:13.818135   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:13.818153   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:13.818430   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:13.818621   41359 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:48:13.820038   41359 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:48:13.820060   41359 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:48:13.820359   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:13.820392   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:13.835775   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0831 22:48:13.836221   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:13.836666   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:13.836684   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:13.836968   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:13.837159   41359 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:48:13.840029   41359 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:48:13.840474   41359 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:39:29 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:48:13.840500   41359 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:48:13.840641   41359 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:48:13.840918   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:13.840952   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:13.855860   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0831 22:48:13.856210   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:13.856737   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:13.856760   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:13.857123   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:13.857329   41359 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:48:13.857479   41359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:48:13.857498   41359 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:48:13.860050   41359 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:48:13.860458   41359 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:39:29 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:48:13.860495   41359 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:48:13.860624   41359 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:48:13.860803   41359 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:48:13.860963   41359 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:48:13.861131   41359 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:48:13.944083   41359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:48:13.967276   41359 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:48:13.967301   41359 api_server.go:166] Checking apiserver status ...
	I0831 22:48:13.967347   41359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:48:13.984200   41359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	W0831 22:48:13.995124   41359 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:48:13.995173   41359 ssh_runner.go:195] Run: ls
	I0831 22:48:13.999495   41359 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:48:14.003350   41359 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0831 22:48:14.003374   41359 status.go:422] ha-957517-m02 apiserver status = Running (err=<nil>)
	I0831 22:48:14.003385   41359 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:48:14.003409   41359 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:48:14.003687   41359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:14.003729   41359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:14.019657   41359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0831 22:48:14.020086   41359 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:14.020566   41359 main.go:141] libmachine: Using API Version  1
	I0831 22:48:14.020587   41359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:14.020909   41359 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:14.021108   41359 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:48:14.022694   41359 status.go:330] ha-957517-m04 host status = "Stopped" (err=<nil>)
	I0831 22:48:14.022705   41359 status.go:343] host is not running, skipping remaining checks
	I0831 22:48:14.022711   41359 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-957517 -n ha-957517
helpers_test.go:245: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p ha-957517 logs -n 25: (1.694039972s)
helpers_test.go:253: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m04 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp testdata/cp-test.txt                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m04_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03:/home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m03 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-957517 node stop m02 -v=7                                                     | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-957517 node start m02 -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-957517 -v=7                                                           | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-957517 -v=7                                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-957517 --wait=true -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-957517                                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC |                     |
	| node    | ha-957517 node delete m03 -v=7                                                   | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
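	
	Editor's note: the audit table above records a cp/ssh round-trip between the ha-957517 nodes (copy a file from one node into another, then read it back over ssh). The sketch below is illustrative only and is not part of the captured log; it shows how that round-trip could be driven from Go, reusing the binary path (out/minikube-linux-amd64), profile name, and file paths shown in the table. All helper names are hypothetical.
	
	// cp_roundtrip_sketch.go - minimal sketch of the cp/ssh steps recorded above.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run invokes the minikube binary used in this report with the given arguments.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		// Copy cp-test.txt from node m04 into node m02 ...
		if _, err := run("-p", "ha-957517", "cp",
			"ha-957517-m04:/home/docker/cp-test.txt",
			"ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt"); err != nil {
			panic(err)
		}
		// ... then read it back over ssh to verify the copy landed.
		out, err := run("-p", "ha-957517", "ssh", "-n", "ha-957517-m02",
			"sudo", "cat", "/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt")
		if err != nil {
			panic(err)
		}
		fmt.Print(out)
	}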
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:37:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:37:43.980883   38680 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:37:43.981003   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981012   38680 out.go:358] Setting ErrFile to fd 2...
	I0831 22:37:43.981017   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981185   38680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:37:43.981743   38680 out.go:352] Setting JSON to false
	I0831 22:37:43.982668   38680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4811,"bootTime":1725139053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:37:43.982721   38680 start.go:139] virtualization: kvm guest
	I0831 22:37:43.985184   38680 out.go:177] * [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:37:43.986509   38680 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:37:43.986515   38680 notify.go:220] Checking for updates...
	I0831 22:37:43.989086   38680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:37:43.990438   38680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:37:43.991747   38680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:37:43.992969   38680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:37:43.994015   38680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:37:43.995541   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:43.995622   38680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:37:43.995993   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:43.996068   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.011776   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0831 22:37:44.012162   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.012650   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.012667   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.012988   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.013198   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.046996   38680 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 22:37:44.048362   38680 start.go:297] selected driver: kvm2
	I0831 22:37:44.048377   38680 start.go:901] validating driver "kvm2" against &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.048522   38680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:37:44.048853   38680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.048953   38680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:37:44.063722   38680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:37:44.064393   38680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:37:44.064470   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:37:44.064486   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:37:44.064562   38680 start.go:340] cluster config:
	{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.064759   38680 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.066648   38680 out.go:177] * Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	I0831 22:37:44.067887   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:37:44.067918   38680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:37:44.067925   38680 cache.go:56] Caching tarball of preloaded images
	I0831 22:37:44.067991   38680 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:37:44.068000   38680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:37:44.068132   38680 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:37:44.068445   38680 start.go:360] acquireMachinesLock for ha-957517: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:37:44.068502   38680 start.go:364] duration metric: took 31.59µs to acquireMachinesLock for "ha-957517"
	I0831 22:37:44.068521   38680 start.go:96] Skipping create...Using existing machine configuration
	I0831 22:37:44.068531   38680 fix.go:54] fixHost starting: 
	I0831 22:37:44.068854   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:44.068905   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.082801   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0831 22:37:44.083254   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.083798   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.083819   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.084088   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.084260   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.084412   38680 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:37:44.086212   38680 fix.go:112] recreateIfNeeded on ha-957517: state=Running err=<nil>
	W0831 22:37:44.086242   38680 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 22:37:44.088190   38680 out.go:177] * Updating the running kvm2 "ha-957517" VM ...
	I0831 22:37:44.089385   38680 machine.go:93] provisionDockerMachine start ...
	I0831 22:37:44.089401   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.089624   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.092086   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092623   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.092649   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092785   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.092955   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093100   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093214   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.093355   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.093526   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.093536   38680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:37:44.200556   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.200584   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.200847   38680 buildroot.go:166] provisioning hostname "ha-957517"
	I0831 22:37:44.200870   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.201116   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.203857   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204273   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.204297   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204424   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.204626   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204766   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204881   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.205020   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.205217   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.205231   38680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517 && echo "ha-957517" | sudo tee /etc/hostname
	I0831 22:37:44.330466   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.330490   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.333462   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333829   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.333868   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333997   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.334236   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334427   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334627   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.334794   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.334953   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.334968   38680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:37:44.440566   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:37:44.440594   38680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:37:44.440626   38680 buildroot.go:174] setting up certificates
	I0831 22:37:44.440636   38680 provision.go:84] configureAuth start
	I0831 22:37:44.440648   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.440934   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:37:44.443531   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.443928   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.443954   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.444251   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.446892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447301   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.447348   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447478   38680 provision.go:143] copyHostCerts
	I0831 22:37:44.447502   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447538   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:37:44.447557   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447632   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:37:44.447757   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447782   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:37:44.447790   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447831   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:37:44.447904   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447927   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:37:44.447935   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447966   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:37:44.448033   38680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517 san=[127.0.0.1 192.168.39.137 ha-957517 localhost minikube]
	I0831 22:37:44.517123   38680 provision.go:177] copyRemoteCerts
	I0831 22:37:44.517176   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:37:44.517197   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.519747   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520161   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.520195   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520321   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.520494   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.520656   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.520777   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:37:44.602311   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:37:44.602376   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:37:44.631362   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:37:44.631445   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0831 22:37:44.663123   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:37:44.663190   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:37:44.691526   38680 provision.go:87] duration metric: took 250.877979ms to configureAuth
	I0831 22:37:44.691553   38680 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:37:44.691854   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:44.691944   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.694465   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.694868   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.694892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.695159   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.695350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695512   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695618   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.695764   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.695955   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.695971   38680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:39:15.634828   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:39:15.634858   38680 machine.go:96] duration metric: took 1m31.54546155s to provisionDockerMachine
	I0831 22:39:15.634870   38680 start.go:293] postStartSetup for "ha-957517" (driver="kvm2")
	I0831 22:39:15.634881   38680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:39:15.634896   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.635202   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:39:15.635227   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.638236   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638748   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.638776   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638909   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.639093   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.639293   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.639429   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.722855   38680 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:39:15.727014   38680 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:39:15.727034   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:39:15.727097   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:39:15.727199   38680 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:39:15.727212   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:39:15.727302   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:39:15.736663   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:15.760234   38680 start.go:296] duration metric: took 125.353074ms for postStartSetup
	I0831 22:39:15.760279   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.760559   38680 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0831 22:39:15.760588   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.763201   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763613   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.763633   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763770   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.763954   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.764091   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.764216   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	W0831 22:39:15.846286   38680 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0831 22:39:15.846323   38680 fix.go:56] duration metric: took 1m31.777792266s for fixHost
	I0831 22:39:15.846350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.848916   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849334   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.849365   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849543   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.849722   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.849879   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.850019   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.850187   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:39:15.850351   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:39:15.850361   38680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:39:15.951938   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143955.913478582
	
	I0831 22:39:15.951960   38680 fix.go:216] guest clock: 1725143955.913478582
	I0831 22:39:15.951967   38680 fix.go:229] Guest: 2024-08-31 22:39:15.913478582 +0000 UTC Remote: 2024-08-31 22:39:15.846332814 +0000 UTC m=+91.900956878 (delta=67.145768ms)
	I0831 22:39:15.951984   38680 fix.go:200] guest clock delta is within tolerance: 67.145768ms
	I0831 22:39:15.951989   38680 start.go:83] releasing machines lock for "ha-957517", held for 1m31.883475675s
	I0831 22:39:15.952012   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.952276   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:15.955057   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955473   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.955502   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955634   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956283   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956455   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956567   38680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:39:15.956617   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.956632   38680 ssh_runner.go:195] Run: cat /version.json
	I0831 22:39:15.956655   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.959097   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959114   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959529   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959554   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959578   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959597   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959696   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959871   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959900   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960042   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960055   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960225   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960234   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.960339   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:16.064767   38680 ssh_runner.go:195] Run: systemctl --version
	I0831 22:39:16.070863   38680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:39:16.230376   38680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:39:16.236729   38680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:39:16.236783   38680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:39:16.245939   38680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 22:39:16.245960   38680 start.go:495] detecting cgroup driver to use...
	I0831 22:39:16.246006   38680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:39:16.261896   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:39:16.276357   38680 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:39:16.276410   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:39:16.289922   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:39:16.302913   38680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:39:16.451294   38680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:39:16.596005   38680 docker.go:233] disabling docker service ...
	I0831 22:39:16.596062   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:39:16.612423   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:39:16.625984   38680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:39:16.769630   38680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:39:16.915592   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:39:16.929353   38680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:39:16.949875   38680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:39:16.949927   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.960342   38680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:39:16.960402   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.970745   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.980972   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.991090   38680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:39:17.001258   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.011096   38680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.021887   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.031682   38680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:39:17.040513   38680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:39:17.049301   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.194385   38680 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:39:17.428315   38680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:39:17.428408   38680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:39:17.433515   38680 start.go:563] Will wait 60s for crictl version
	I0831 22:39:17.433556   38680 ssh_runner.go:195] Run: which crictl
	I0831 22:39:17.437499   38680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:39:17.479960   38680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:39:17.480026   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.515314   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.547505   38680 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:39:17.548632   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:17.550955   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551269   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:17.551296   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551521   38680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:39:17.556237   38680 kubeadm.go:883] updating cluster {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:39:17.556363   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:39:17.556415   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.600319   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.600339   38680 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:39:17.600382   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.634386   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.634406   38680 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:39:17.634416   38680 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.0 crio true true} ...
	I0831 22:39:17.634526   38680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:39:17.634618   38680 ssh_runner.go:195] Run: crio config
	I0831 22:39:17.682178   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:39:17.682203   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:39:17.682220   38680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:39:17.682240   38680 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957517 NodeName:ha-957517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:39:17.682375   38680 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-957517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:39:17.682399   38680 kube-vip.go:115] generating kube-vip config ...
	I0831 22:39:17.682439   38680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:39:17.694650   38680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:39:17.694772   38680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
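	This static pod manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. Once kube-vip is up, the VIP it advertises per the manifest (192.168.39.254 on eth0) can be confirmed on the control-plane node with a quick check such as the following sketch (not part of the log):

	    # sketch: confirm kube-vip has attached the HA VIP to eth0
	    ip addr show eth0 | grep 192.168.39.254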
	I0831 22:39:17.694843   38680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:39:17.705040   38680 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:39:17.705103   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 22:39:17.714471   38680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0831 22:39:17.733900   38680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:39:17.754099   38680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:39:17.773312   38680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:39:17.792847   38680 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:39:17.797963   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.955439   38680 ssh_runner.go:195] Run: sudo systemctl start kubelet
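	After the drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the unit file are copied, the daemon-reload/start pair above brings kubelet up. A minimal follow-up check on the node might look like this sketch:

	    # sketch: verify the kubelet unit is running and picked up the new drop-in
	    systemctl is-active kubelet
	    systemctl cat kubelet | head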
	I0831 22:39:17.970324   38680 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.137
	I0831 22:39:17.970348   38680 certs.go:194] generating shared ca certs ...
	I0831 22:39:17.970363   38680 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:17.970501   38680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:39:17.970573   38680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:39:17.970590   38680 certs.go:256] generating profile certs ...
	I0831 22:39:17.970697   38680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:39:17.970732   38680 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56
	I0831 22:39:17.970747   38680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.26 192.168.39.254]
	I0831 22:39:18.083143   38680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 ...
	I0831 22:39:18.083186   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56: {Name:mk489dd79b841ee44fa8d66455c5fed8039b89dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083399   38680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 ...
	I0831 22:39:18.083417   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56: {Name:mkbcff44832282605e436763bcf5c32528ce79a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083523   38680 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:39:18.083680   38680 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:39:18.083806   38680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:39:18.083821   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:39:18.083834   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:39:18.083847   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:39:18.083860   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:39:18.083873   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:39:18.083885   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:39:18.083901   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:39:18.083913   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:39:18.083956   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:39:18.083983   38680 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:39:18.083992   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:39:18.084015   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:39:18.084037   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:39:18.084058   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:39:18.084099   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:18.084124   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.084138   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.084150   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.084726   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:39:18.111136   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:39:18.134120   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:39:18.157775   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:39:18.181362   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 22:39:18.205148   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:39:18.229117   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:39:18.252441   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:39:18.276005   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:39:18.298954   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:39:18.321901   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:39:18.345593   38680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:39:18.363290   38680 ssh_runner.go:195] Run: openssl version
	I0831 22:39:18.369103   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:39:18.379738   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384052   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384104   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.389812   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:39:18.399006   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:39:18.409817   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414246   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414294   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.419998   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:39:18.429270   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:39:18.439988   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444351   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444394   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.450124   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
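	The 8-hex-digit symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding certificates; for example, re-running the hash command from the log for the minikube CA reproduces the name of its symlink:

	    # same command as in the log; prints b5213941, hence /etc/ssl/certs/b5213941.0
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem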
	I0831 22:39:18.459442   38680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:39:18.463809   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 22:39:18.469261   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 22:39:18.474818   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 22:39:18.480052   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 22:39:18.485805   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 22:39:18.490982   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
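	The six openssl checks above use -checkend 86400, i.e. they succeed only if the certificate remains valid for at least the next 24 hours (86400 seconds). Spelled out for one of the certificates actually checked:

	    # sketch: exit status 0 => still valid 24h from now, non-zero => expires within 24h
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"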
	I0831 22:39:18.496430   38680 kubeadm.go:392] StartCluster: {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
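	The Nodes list in the StartCluster config above describes the HA topology under test: three control-plane nodes (192.168.39.137, .61, .26) plus one worker (.109), fronted by the APIServerHAVIP 192.168.39.254. A quick way to view the same topology from the host would be the following sketch, assuming the usual minikube-managed kubeconfig context named after the profile:

	    # sketch: list the four ha-957517 nodes with their roles and IPs
	    kubectl --context ha-957517 get nodes -o wide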
	I0831 22:39:18.496538   38680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:39:18.496594   38680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:39:18.540006   38680 cri.go:89] found id: "74033e6de6f78771aa278fb3bf2337b2694d3624100dd7e11f196f8efd688612"
	I0831 22:39:18.540034   38680 cri.go:89] found id: "829e2803166e8b4f563134db85ca290dee0f761c7f98598b5808a7653b837f29"
	I0831 22:39:18.540039   38680 cri.go:89] found id: "ce5a5113d787c6fa00a34027dbed5a4c4a2879f803312b2f06a9b73b7fabb497"
	I0831 22:39:18.540042   38680 cri.go:89] found id: "4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e"
	I0831 22:39:18.540044   38680 cri.go:89] found id: "0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6"
	I0831 22:39:18.540047   38680 cri.go:89] found id: "c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74"
	I0831 22:39:18.540050   38680 cri.go:89] found id: "35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23"
	I0831 22:39:18.540052   38680 cri.go:89] found id: "b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d"
	I0831 22:39:18.540055   38680 cri.go:89] found id: "883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe"
	I0831 22:39:18.540061   38680 cri.go:89] found id: "e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3"
	I0831 22:39:18.540073   38680 cri.go:89] found id: "f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18"
	I0831 22:39:18.540077   38680 cri.go:89] found id: "179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d"
	I0831 22:39:18.540081   38680 cri.go:89] found id: "f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899"
	I0831 22:39:18.540085   38680 cri.go:89] found id: ""
	I0831 22:39:18.540125   38680 ssh_runner.go:195] Run: sudo runc list -f json
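	The bare container IDs found above come from the crictl ps --quiet call a few lines earlier; dropping --quiet (a sketch, not part of the run) would list the same kube-system containers with names, states and restart counts, which is what the CRI-O debug log below enumerates in its ListContainers responses:

	    # sketch: human-readable version of the --quiet listing above
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system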
	
	
	==> CRI-O <==
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.606931931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144494606906831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a137fd9f-a649-48f5-a09b-939e545ae106 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.607981733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e705164-4e72-43f9-ba6b-6eec5cc1254f name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.608036475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e705164-4e72-43f9-ba6b-6eec5cc1254f name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.608564434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e705164-4e72-43f9-ba6b-6eec5cc1254f name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.650267315Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=561276e3-3b2e-433b-af4a-7171af4388a4 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.650341374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=561276e3-3b2e-433b-af4a-7171af4388a4 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.651589119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d178f4b-97b9-48d8-96be-a7d4b5052424 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.652033517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144494652010399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d178f4b-97b9-48d8-96be-a7d4b5052424 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.652914696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b4e7062-7f86-4f3d-b6ed-92a108963f42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.652977564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b4e7062-7f86-4f3d-b6ed-92a108963f42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.653655725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b4e7062-7f86-4f3d-b6ed-92a108963f42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.703095888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10f58d41-ec22-4bb3-b759-14221a598957 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.703167042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10f58d41-ec22-4bb3-b759-14221a598957 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.704293537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2d8de15-2445-420a-b68a-0083dd7f269c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.704912819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144494704887962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2d8de15-2445-420a-b68a-0083dd7f269c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.705600878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64d4349d-ac9e-4f8a-aae5-a15f7a01775d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.705676036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64d4349d-ac9e-4f8a-aae5-a15f7a01775d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.706110101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64d4349d-ac9e-4f8a-aae5-a15f7a01775d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.750349771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d770573f-3bbb-40a3-9779-47bae8cf7fb3 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.750556561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d770573f-3bbb-40a3-9779-47bae8cf7fb3 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.751866165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=338c8d6e-f2e5-43f1-b4cf-b0ca4c7b69fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.752357633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144494752333990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=338c8d6e-f2e5-43f1-b4cf-b0ca4c7b69fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.752995344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb59670a-8ab5-4ee6-b523-fea832e9e480 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.753070081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb59670a-8ab5-4ee6-b523-fea832e9e480 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:48:14 ha-957517 crio[3554]: time="2024-08-31 22:48:14.753539145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abf6707edc54b2a15cb047df782ad8eb4424904c49faf00e2e08d1b0c2d993f2,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725144042377825489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725144003383999943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4035603e0bcaed66181509c41d0abcbd154ab5239268bad513b3481c9e12011,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725143999371721429,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ce840d2
d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b
324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725143964102040766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb59670a-8ab5-4ee6-b523-fea832e9e480 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	abf6707edc54b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       4                   0b38c1d912e18       storage-provisioner
	c20207ed446d6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago       Running             kube-controller-manager   2                   a9b03c09aefd7       kube-controller-manager-ha-957517
	c9a7461b1cbf9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Running             kube-apiserver            3                   3c9af8992e786       kube-apiserver-ha-957517
	b4035603e0bca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       3                   0b38c1d912e18       storage-provisioner
	b2858979b6470       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      8 minutes ago       Running             busybox                   1                   3b0c514f045e8       busybox-7dff88458-zdnwd
	f52e663cfc090       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      8 minutes ago       Running             kube-vip                  0                   f30821f6fbe0a       kube-vip-ha-957517
	7c0a265fbf500       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago       Running             kube-proxy                1                   52b89349255db       kube-proxy-xrp64
	e7ce840d2d77d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   1                   79c31968d44d2       coredns-6f6b679f8f-k7rsc
	bc02ceedf7190       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago       Running             kindnet-cni               1                   69681cb02b753       kindnet-tkvsc
	94b314de8f0f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   1                   92ce9fbbebea8       coredns-6f6b679f8f-pc7gn
	06800a2b4052c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago       Running             kube-scheduler            1                   10c87af2fbc6e       kube-scheduler-ha-957517
	5a9df191ac669       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago       Running             etcd                      1                   bb4ef0b4cc881       etcd-ha-957517
	97642a4900a4f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago       Exited              kube-controller-manager   1                   a9b03c09aefd7       kube-controller-manager-ha-957517
	c57f67ede8a00       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Exited              kube-apiserver            2                   3c9af8992e786       kube-apiserver-ha-957517
	dc9ea3c2c4cc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Exited              busybox                   0                   9f283cd54a11f       busybox-7dff88458-zdnwd
	4a85b32a796fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Exited              coredns                   0                   6e863e5cd9b9c       coredns-6f6b679f8f-k7rsc
	0cfba67fe9abb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Exited              coredns                   0                   298283fc5c9c2       coredns-6f6b679f8f-pc7gn
	35cc0bc2b6243       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    19 minutes ago      Exited              kindnet-cni               0                   37828bdcd38b5       kindnet-tkvsc
	b1a123f41fac1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Exited              kube-proxy                0                   99877abcdf5a7       kube-proxy-xrp64
	e1c6a4e36ddb2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Exited              kube-scheduler            0                   144e67a21ecaa       kube-scheduler-ha-957517
	f3ae732e5626c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Exited              etcd                      0                   960ae9b08a3ee       etcd-ha-957517
	
	
	==> coredns [0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6] <==
	[INFO] 10.244.0.4:36544 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002043312s
	[INFO] 10.244.1.2:34999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003609s
	[INFO] 10.244.1.2:45741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017294944s
	[INFO] 10.244.1.2:57093 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224681s
	[INFO] 10.244.2.2:49538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000358252s
	[INFO] 10.244.2.2:53732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00185161s
	[INFO] 10.244.2.2:41165 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231402s
	[INFO] 10.244.2.2:60230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118116s
	[INFO] 10.244.2.2:42062 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271609s
	[INFO] 10.244.0.4:49034 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067938s
	[INFO] 10.244.0.4:36002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196492s
	[INFO] 10.244.1.2:54186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124969s
	[INFO] 10.244.1.2:47709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000506218s
	[INFO] 10.244.0.4:54205 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087475s
	[INFO] 10.244.0.4:48802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055159s
	[INFO] 10.244.1.2:46825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148852s
	[INFO] 10.244.2.2:60523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183145s
	[INFO] 10.244.0.4:53842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116944s
	[INFO] 10.244.0.4:56291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000217808s
	[INFO] 10.244.0.4:53612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00028657s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	
	
	==> coredns [4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e] <==
	[INFO] 10.244.0.4:43334 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001723638s
	[INFO] 10.244.0.4:54010 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080627s
	[INFO] 10.244.0.4:47700 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424459s
	[INFO] 10.244.0.4:50346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070487s
	[INFO] 10.244.0.4:43522 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051146s
	[INFO] 10.244.1.2:60157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099584s
	[INFO] 10.244.1.2:48809 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104515s
	[INFO] 10.244.2.2:37042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132626s
	[INFO] 10.244.2.2:38343 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117546s
	[INFO] 10.244.2.2:53716 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092804s
	[INFO] 10.244.2.2:59881 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068808s
	[INFO] 10.244.0.4:40431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093051s
	[INFO] 10.244.0.4:39552 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087951s
	[INFO] 10.244.1.2:59301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113713s
	[INFO] 10.244.1.2:40299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210744s
	[INFO] 10.244.1.2:54276 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210063s
	[INFO] 10.244.2.2:34222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307653s
	[INFO] 10.244.2.2:42028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089936s
	[INFO] 10.244.2.2:47927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066426s
	[INFO] 10.244.0.4:39601 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085891s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2124518183]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:39:34.272) (total time: 10001ms):
	Trace[2124518183]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:39:44.273)
	Trace[2124518183]: [10.001560604s] [10.001560604s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40346->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[175734570]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:39:36.470) (total time: 13095ms):
	Trace[175734570]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40346->10.96.0.1:443: read: connection reset by peer 13095ms (22:39:49.565)
	Trace[175734570]: [13.095388766s] [13.095388766s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40346->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49714->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49714->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-957517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_28_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:48:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:45:21 +0000   Sat, 31 Aug 2024 22:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-957517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 438078db78ee43a0bfe8057c915827a8
	  System UUID:                438078db-78ee-43a0-bfe8-057c915827a8
	  Boot ID:                    e88a2dfb-1351-416c-9b78-5a255e623f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zdnwd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-6f6b679f8f-k7rsc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-6f6b679f8f-pc7gn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-957517                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-tkvsc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-957517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-957517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xrp64                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-957517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-957517                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 8m4s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 19m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node ha-957517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node ha-957517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node ha-957517 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                    node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal   NodeReady                19m (x2 over 19m)      kubelet          Node ha-957517 status is now: NodeReady
	  Normal   RegisteredNode           18m                    node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Warning  ContainerGCFailed        9m54s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             9m15s (x2 over 9m40s)  kubelet          Node ha-957517 status is now: NodeNotReady
	  Normal   RegisteredNode           8m12s                  node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	  Normal   RegisteredNode           8m5s                   node-controller  Node ha-957517 event: Registered Node ha-957517 in Controller
	
	
	Name:               ha-957517-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_29_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:29:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:48:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:45:56 +0000   Sat, 31 Aug 2024 22:40:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-957517-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a152f180715c42228f54c353a9e8c1bb
	  System UUID:                a152f180-715c-4222-8f54-c353a9e8c1bb
	  Boot ID:                    53816b5a-a520-4752-84ea-97dfd1bb1a77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cwtrb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-957517-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-bmxh2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-957517-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-957517-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-dvpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-957517-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-957517-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m1s                   kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-957517-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-957517-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-957517-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                    node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  NodeNotReady             15m                    node-controller  Node ha-957517-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m33s (x8 over 8m34s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s (x8 over 8m34s)  kubelet          Node ha-957517-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s (x7 over 8m34s)  kubelet          Node ha-957517-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m12s                  node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	  Normal  RegisteredNode           8m5s                   node-controller  Node ha-957517-m02 event: Registered Node ha-957517-m02 in Controller
	
	
	Name:               ha-957517-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957517-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-957517
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_31_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:31:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957517-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:35:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 22:32:11 +0000   Sat, 31 Aug 2024 22:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-957517-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08b180ad339e4d19acb3ea0e7328dc00
	  System UUID:                08b180ad-339e-4d19-acb3-ea0e7328dc00
	  Boot ID:                    eb027e2a-5c22-4721-9b4b-8b9696ccec09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2t9r8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-6f6xd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  RegisteredNode           16m                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-957517-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-957517-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-957517-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m12s              node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  RegisteredNode           8m5s               node-controller  Node ha-957517-m04 event: Registered Node ha-957517-m04 in Controller
	  Normal  NodeNotReady             7m32s              node-controller  Node ha-957517-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Aug31 22:28] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.064763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057170] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193531] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.118523] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.278233] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.003192] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.620544] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058441] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.958169] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083987] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.815006] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.616164] kauditd_printk_skb: 38 callbacks suppressed
	[Aug31 22:29] kauditd_printk_skb: 24 callbacks suppressed
	[Aug31 22:39] systemd-fstab-generator[3479]: Ignoring "noauto" option for root device
	[  +0.149859] systemd-fstab-generator[3491]: Ignoring "noauto" option for root device
	[  +0.177091] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +0.139553] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +0.274919] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.761462] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +3.640979] kauditd_printk_skb: 122 callbacks suppressed
	[ +14.497757] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.070411] kauditd_printk_skb: 1 callbacks suppressed
	[Aug31 22:40] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.831333] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321] <==
	{"level":"info","ts":"2024-08-31T22:41:05.334496Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.334841Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.353728Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.373785Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5527995f6263874a","to":"3a30a86b86970552","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-31T22:41:05.373849Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:41:05.383312Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5527995f6263874a","to":"3a30a86b86970552","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-31T22:41:05.383462Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.897133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=(6136041652267222858 17628752215980669721)"}
	{"level":"info","ts":"2024-08-31T22:48:11.899767Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","removed-remote-peer-id":"3a30a86b86970552","removed-remote-peer-urls":["https://192.168.39.26:2380"]}
	{"level":"info","ts":"2024-08-31T22:48:11.899874Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"warn","ts":"2024-08-31T22:48:11.900032Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.900087Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a30a86b86970552"}
	{"level":"warn","ts":"2024-08-31T22:48:11.900330Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.900457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.900718Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"warn","ts":"2024-08-31T22:48:11.900956Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552","error":"context canceled"}
	{"level":"warn","ts":"2024-08-31T22:48:11.901058Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3a30a86b86970552","error":"failed to read 3a30a86b86970552 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-31T22:48:11.901115Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"warn","ts":"2024-08-31T22:48:11.901569Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-08-31T22:48:11.901682Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.901727Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.901762Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"5527995f6263874a","removed-remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:48:11.901824Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"5527995f6263874a","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"3a30a86b86970552"}
	{"level":"warn","ts":"2024-08-31T22:48:11.914237Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"5527995f6263874a","remote-peer-id-stream-handler":"5527995f6263874a","remote-peer-id-from":"3a30a86b86970552"}
	{"level":"warn","ts":"2024-08-31T22:48:11.923270Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"5527995f6263874a","remote-peer-id-stream-handler":"5527995f6263874a","remote-peer-id-from":"3a30a86b86970552"}
	
	
	==> etcd [f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18] <==
	2024/08/31 22:37:44 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/31 22:37:44 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-31T22:37:44.867346Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T22:37:44.867485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-31T22:37:44.867596Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"5527995f6263874a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-31T22:37:44.867783Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.867840Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.867900Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868053Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868113Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868180Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868211Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868234Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868325Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868480Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868535Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868564Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868593Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.871798Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"warn","ts":"2024-08-31T22:37:44.871868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.676535677s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-31T22:37:44.871944Z","caller":"traceutil/trace.go:171","msg":"trace[1670798017] range","detail":"{range_begin:; range_end:; }","duration":"8.676624743s","start":"2024-08-31T22:37:36.195311Z","end":"2024-08-31T22:37:44.871936Z","steps":["trace[1670798017] 'agreement among raft nodes before linearized reading'  (duration: 8.676534851s)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:37:44.871994Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-08-31T22:37:44.872025Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-957517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	{"level":"error","ts":"2024-08-31T22:37:44.872015Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 22:48:15 up 20 min,  0 users,  load average: 0.08, 0.24, 0.30
	Linux ha-957517 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23] <==
	I0831 22:37:11.965462       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:21.965624       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:21.965663       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:37:21.965778       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:21.965799       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:21.965887       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:21.965908       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:21.965961       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:21.965983       1 main.go:299] handling current node
	I0831 22:37:31.963598       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:31.963696       1 main.go:299] handling current node
	I0831 22:37:31.963726       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:31.963744       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:37:31.963981       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:31.964015       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:31.964101       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:31.964130       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:41.972192       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:41.972307       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:41.972549       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:41.972573       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:41.972674       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:41.972700       1 main.go:299] handling current node
	I0831 22:37:41.972724       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:41.972729       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463] <==
	I0831 22:47:35.768580       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:47:45.771303       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:47:45.771623       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:47:45.771909       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:47:45.772023       1 main.go:299] handling current node
	I0831 22:47:45.772086       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:47:45.772115       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:47:45.772225       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:47:45.772246       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:47:55.767759       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:47:55.767870       1 main.go:299] handling current node
	I0831 22:47:55.767898       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:47:55.767916       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:47:55.768053       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:47:55.768073       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:47:55.768150       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:47:55.768184       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:48:05.761732       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:48:05.761788       1 main.go:299] handling current node
	I0831 22:48:05.761807       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:48:05.761815       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:48:05.762046       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:48:05.762056       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:48:05.762175       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:48:05.762185       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c57f67ede8a0054dd9f71f133da9ac07362144d23615517f6d51e423038dac66] <==
	I0831 22:39:24.893970       1 options.go:228] external host was not specified, using 192.168.39.137
	I0831 22:39:24.900043       1 server.go:142] Version: v1.31.0
	I0831 22:39:24.900146       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:39:25.753461       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0831 22:39:25.780467       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 22:39:25.782476       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0831 22:39:25.782509       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0831 22:39:25.782760       1 instance.go:232] Using reconciler: lease
	W0831 22:39:45.750450       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0831 22:39:45.750634       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0831 22:39:45.784017       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7] <==
	I0831 22:40:05.374239       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0831 22:40:05.472696       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 22:40:05.472791       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 22:40:05.472859       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 22:40:05.473055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 22:40:05.477475       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 22:40:05.478969       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0831 22:40:05.484658       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.26 192.168.39.61]
	I0831 22:40:05.487470       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 22:40:05.493626       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 22:40:05.493664       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 22:40:05.493723       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 22:40:05.493771       1 aggregator.go:171] initial CRD sync complete...
	I0831 22:40:05.493807       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 22:40:05.493831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 22:40:05.493854       1 cache.go:39] Caches are synced for autoregister controller
	I0831 22:40:05.511569       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 22:40:05.511608       1 policy_source.go:224] refreshing policies
	I0831 22:40:05.563317       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 22:40:05.586662       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 22:40:05.594886       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0831 22:40:05.601068       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0831 22:40:06.390155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0831 22:40:06.716212       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137 192.168.39.26 192.168.39.61]
	W0831 22:40:16.715522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137 192.168.39.61]
	
	
	==> kube-controller-manager [97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef] <==
	I0831 22:39:25.523610       1 serving.go:386] Generated self-signed cert in-memory
	I0831 22:39:26.109857       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 22:39:26.110071       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:39:26.113096       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 22:39:26.113292       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 22:39:26.113913       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 22:39:26.114016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 22:39:46.790111       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.137:8443/healthz\": dial tcp 192.168.39.137:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7] <==
	I0831 22:40:55.373161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:55.390183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:55.480686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:40:55.587243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:40:56.212165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.842µs"
	I0831 22:40:59.313150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m04"
	I0831 22:41:16.680676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.88817ms"
	I0831 22:41:16.681970       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.907µs"
	I0831 22:41:25.473132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:45:21.065296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517"
	I0831 22:45:56.005326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m02"
	I0831 22:46:30.597281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:48:08.518727       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:48:08.553073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	I0831 22:48:08.608645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.087823ms"
	I0831 22:48:08.676826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.113048ms"
	I0831 22:48:08.712000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.883245ms"
	E0831 22:48:08.712086       1 replica_set.go:560] "Unhandled Error" err="sync \"default/busybox-7dff88458\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7dff88458\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0831 22:48:08.713531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="86.557µs"
	I0831 22:48:08.718937       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.576µs"
	I0831 22:48:10.710293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.734µs"
	I0831 22:48:11.082264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.184µs"
	I0831 22:48:11.091006       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.458µs"
	I0831 22:48:12.831355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-957517-m03"
	E0831 22:48:12.879120       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-957517-m03\", UID:\"82eccc39-d22d-443c-be2a-74a642f1c42f\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}
, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-957517-m03\", UID:\"0c1d87ca-a827-44e9-950d-f70a7e3b9bc5\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-957517-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:39:26.911344       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:29.982883       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:33.055492       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:39.201473       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:51.487041       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0831 22:40:10.488958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0831 22:40:10.489188       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:40:10.537261       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:40:10.537354       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:40:10.537501       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:40:10.542452       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:40:10.542992       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:40:10.543052       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:40:10.545123       1 config.go:197] "Starting service config controller"
	I0831 22:40:10.545206       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:40:10.545249       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:40:10.545277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:40:10.546250       1 config.go:326] "Starting node config controller"
	I0831 22:40:10.546438       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:40:10.646348       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:40:10.646448       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:40:10.646525       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d] <==
	E0831 22:36:21.374054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:21.374474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:21.374644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:29.181771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0831 22:36:29.181910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:29.181978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:29.182082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:29.182138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0831 22:36:29.183726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.022821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.022959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.023030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.023072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.023128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.023169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:56.381866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:56.382427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:56.382547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:56.382584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:05.598350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:05.598532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:33.246415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:33.246482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:39.390329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:39.390566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb] <==
	W0831 22:39:55.703786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.137:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:55.703902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.137:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:39:55.733952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.137:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:55.734020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.137:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:39:56.221045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.137:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:56.221132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.137:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:39:56.251131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.137:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:39:56.251271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.137:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:02.013299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.137:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:40:02.013444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.137:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:02.529215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.137:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:40:02.529971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.137:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:02.864927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.137:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:40:02.864971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.137:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:40:05.437244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:40:05.437838       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:40:05.437671       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:40:05.438221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:40:05.437720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:40:05.438340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:40:05.437777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:40:05.438466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:40:05.438193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:40:05.438537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:40:23.801533       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3] <==
	E0831 22:31:40.726228       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-2t9r8"
	I0831 22:31:40.726253       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.731781       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:31:40.731866       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3457f0a0-fd3b-4e40-819f-9d57c29036e6(kube-system/kindnet-mljxh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mljxh"
	E0831 22:31:40.731884       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-mljxh"
	I0831 22:31:40.731900       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:37:32.346967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0831 22:37:32.516236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0831 22:37:35.083345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0831 22:37:35.963186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0831 22:37:36.540206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0831 22:37:36.648180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0831 22:37:37.049684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:38.229658       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0831 22:37:38.502251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0831 22:37:38.836806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:39.474109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:39.769349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0831 22:37:41.935022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0831 22:37:42.328066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:43.701831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	I0831 22:37:44.800445       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0831 22:37:44.800584       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0831 22:37:44.800769       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0831 22:37:44.803420       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 31 22:46:41 ha-957517 kubelet[1303]: E0831 22:46:41.701159    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144401693327224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:41 ha-957517 kubelet[1303]: E0831 22:46:41.701537    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144401693327224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:51 ha-957517 kubelet[1303]: E0831 22:46:51.703551    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144411703058049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:46:51 ha-957517 kubelet[1303]: E0831 22:46:51.703588    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144411703058049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:01 ha-957517 kubelet[1303]: E0831 22:47:01.706417    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144421705733229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:01 ha-957517 kubelet[1303]: E0831 22:47:01.706462    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144421705733229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:11 ha-957517 kubelet[1303]: E0831 22:47:11.709511    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144431708085741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:11 ha-957517 kubelet[1303]: E0831 22:47:11.710041    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144431708085741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:21 ha-957517 kubelet[1303]: E0831 22:47:21.414553    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 22:47:21 ha-957517 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 22:47:21 ha-957517 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 22:47:21 ha-957517 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:47:21 ha-957517 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:47:21 ha-957517 kubelet[1303]: E0831 22:47:21.714614    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144441714228907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:21 ha-957517 kubelet[1303]: E0831 22:47:21.714638    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144441714228907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:31 ha-957517 kubelet[1303]: E0831 22:47:31.716606    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144451715648191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:31 ha-957517 kubelet[1303]: E0831 22:47:31.716666    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144451715648191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:41 ha-957517 kubelet[1303]: E0831 22:47:41.724156    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144461719202894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:41 ha-957517 kubelet[1303]: E0831 22:47:41.724259    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144461719202894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:51 ha-957517 kubelet[1303]: E0831 22:47:51.725648    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144471725319807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:47:51 ha-957517 kubelet[1303]: E0831 22:47:51.725988    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144471725319807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:01 ha-957517 kubelet[1303]: E0831 22:48:01.728129    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144481727545278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:01 ha-957517 kubelet[1303]: E0831 22:48:01.729741    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144481727545278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:11 ha-957517 kubelet[1303]: E0831 22:48:11.733356    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144491733064531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:11 ha-957517 kubelet[1303]: E0831 22:48:11.733427    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144491733064531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 22:48:14.332110   41443 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18943-13149/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-957517 -n ha-957517
helpers_test.go:262: (dbg) Run:  kubectl --context ha-957517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox-7dff88458-bqkbm kube-apiserver-ha-957517-m03
helpers_test.go:275: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context ha-957517 describe pod busybox-7dff88458-bqkbm kube-apiserver-ha-957517-m03
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context ha-957517 describe pod busybox-7dff88458-bqkbm kube-apiserver-ha-957517-m03: exit status 1 (76.101943ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-bqkbm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgjmd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-cgjmd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age              From               Message
	  ----     ------            ----             ----               -------
	  Warning  FailedScheduling  6s (x2 over 8s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  6s (x2 over 8s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kube-apiserver-ha-957517-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:280: kubectl --context ha-957517 describe pod busybox-7dff88458-bqkbm kube-apiserver-ha-957517-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.46s)
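The non-zero exit from the describe step above is expected once node m03 has been deleted: busybox-7dff88458-bqkbm is described normally, but kube-apiserver-ha-957517-m03 no longer exists, so kubectl returns exit status 1. A minimal, hypothetical sketch of a more tolerant post-mortem helper (the helper name and flow are assumptions, not the suite's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// describeIfPresent runs `kubectl describe` for a pod only when the pod still
// exists, so a pod that vanished with a deleted node does not fail the step.
func describeIfPresent(context, namespace, pod string) {
	if err := exec.Command("kubectl", "--context", context, "-n", namespace, "get", "pod", pod).Run(); err != nil {
		fmt.Printf("skipping %s/%s: %v\n", namespace, pod, err)
		return
	}
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace, "describe", "pod", pod).CombinedOutput()
	if err != nil {
		fmt.Printf("describe %s/%s failed: %v\n", namespace, pod, err)
	}
	fmt.Print(string(out))
}

func main() {
	describeIfPresent("ha-957517", "default", "busybox-7dff88458-bqkbm")
	describeIfPresent("ha-957517", "kube-system", "kube-apiserver-ha-957517-m03")
}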

                                                
                                    
TestMultiControlPlane/serial/StopCluster (173.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 stop -v=7 --alsologtostderr
E0831 22:49:59.875045   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 stop -v=7 --alsologtostderr: exit status 82 (2m2.307830145s)

                                                
                                                
-- stdout --
	* Stopping node "ha-957517-m04"  ...
	* Stopping node "ha-957517-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:48:16.795071   41562 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:48:16.795200   41562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:48:16.795210   41562 out.go:358] Setting ErrFile to fd 2...
	I0831 22:48:16.795214   41562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:48:16.795429   41562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:48:16.795643   41562 out.go:352] Setting JSON to false
	I0831 22:48:16.795715   41562 mustload.go:65] Loading cluster: ha-957517
	I0831 22:48:16.796041   41562 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:48:16.796125   41562 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:48:16.796288   41562 mustload.go:65] Loading cluster: ha-957517
	I0831 22:48:16.796414   41562 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:48:16.796442   41562 stop.go:39] StopHost: ha-957517-m04
	I0831 22:48:16.796918   41562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:16.796965   41562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:16.811570   41562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0831 22:48:16.812000   41562 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:16.812479   41562 main.go:141] libmachine: Using API Version  1
	I0831 22:48:16.812516   41562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:16.812901   41562 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:16.815660   41562 out.go:177] * Stopping node "ha-957517-m04"  ...
	I0831 22:48:16.816865   41562 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0831 22:48:16.816902   41562 main.go:141] libmachine: (ha-957517-m04) Calling .DriverName
	I0831 22:48:16.817115   41562 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0831 22:48:16.817136   41562 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:48:16.818681   41562 retry.go:31] will retry after 281.344744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0831 22:48:17.101204   41562 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:48:17.103046   41562 retry.go:31] will retry after 464.645486ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0831 22:48:17.568490   41562 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:48:17.570169   41562 retry.go:31] will retry after 328.898523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0831 22:48:17.899704   41562 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	I0831 22:48:17.901522   41562 retry.go:31] will retry after 747.481333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0831 22:48:18.649128   41562 main.go:141] libmachine: (ha-957517-m04) Calling .GetSSHHostname
	W0831 22:48:18.651015   41562 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0831 22:48:18.651049   41562 main.go:141] libmachine: Stopping "ha-957517-m04"...
	I0831 22:48:18.651058   41562 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:48:18.652298   41562 stop.go:66] stop err: Machine "ha-957517-m04" is already stopped.
	I0831 22:48:18.652339   41562 stop.go:69] host is already stopped
	I0831 22:48:18.652350   41562 stop.go:39] StopHost: ha-957517-m02
	I0831 22:48:18.652625   41562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:48:18.652663   41562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:48:18.667868   41562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0831 22:48:18.668251   41562 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:48:18.668628   41562 main.go:141] libmachine: Using API Version  1
	I0831 22:48:18.668648   41562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:48:18.668955   41562 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:48:18.671858   41562 out.go:177] * Stopping node "ha-957517-m02"  ...
	I0831 22:48:18.673089   41562 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0831 22:48:18.673152   41562 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:48:18.673403   41562 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0831 22:48:18.673426   41562 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:48:18.676202   41562 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:48:18.676626   41562 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:39:29 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:48:18.676660   41562 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:48:18.676785   41562 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:48:18.676960   41562 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:48:18.677088   41562 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:48:18.677249   41562 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	I0831 22:48:18.759012   41562 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0831 22:48:18.812656   41562 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0831 22:48:18.866430   41562 main.go:141] libmachine: Stopping "ha-957517-m02"...
	I0831 22:48:18.866457   41562 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:48:18.868063   41562 main.go:141] libmachine: (ha-957517-m02) Calling .Stop
	I0831 22:48:18.871843   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 0/120
	I0831 22:48:19.873419   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 1/120
	I0831 22:48:20.874879   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 2/120
	I0831 22:48:21.876154   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 3/120
	I0831 22:48:22.877767   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 4/120
	I0831 22:48:23.879758   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 5/120
	I0831 22:48:24.881370   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 6/120
	I0831 22:48:25.883044   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 7/120
	I0831 22:48:26.884318   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 8/120
	I0831 22:48:27.885682   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 9/120
	I0831 22:48:28.887758   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 10/120
	I0831 22:48:29.889915   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 11/120
	I0831 22:48:30.891179   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 12/120
	I0831 22:48:31.892632   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 13/120
	I0831 22:48:32.894502   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 14/120
	I0831 22:48:33.896602   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 15/120
	I0831 22:48:34.898061   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 16/120
	I0831 22:48:35.899571   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 17/120
	I0831 22:48:36.900753   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 18/120
	I0831 22:48:37.902071   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 19/120
	I0831 22:48:38.904318   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 20/120
	I0831 22:48:39.905829   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 21/120
	I0831 22:48:40.907356   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 22/120
	I0831 22:48:41.908781   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 23/120
	I0831 22:48:42.910334   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 24/120
	I0831 22:48:43.912158   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 25/120
	I0831 22:48:44.913698   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 26/120
	I0831 22:48:45.915359   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 27/120
	I0831 22:48:46.916784   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 28/120
	I0831 22:48:47.918653   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 29/120
	I0831 22:48:48.920481   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 30/120
	I0831 22:48:49.922505   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 31/120
	I0831 22:48:50.924040   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 32/120
	I0831 22:48:51.925362   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 33/120
	I0831 22:48:52.926585   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 34/120
	I0831 22:48:53.928583   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 35/120
	I0831 22:48:54.929840   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 36/120
	I0831 22:48:55.931078   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 37/120
	I0831 22:48:56.932463   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 38/120
	I0831 22:48:57.933628   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 39/120
	I0831 22:48:58.935156   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 40/120
	I0831 22:48:59.936373   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 41/120
	I0831 22:49:00.937640   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 42/120
	I0831 22:49:01.938922   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 43/120
	I0831 22:49:02.940297   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 44/120
	I0831 22:49:03.941881   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 45/120
	I0831 22:49:04.943201   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 46/120
	I0831 22:49:05.944362   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 47/120
	I0831 22:49:06.945773   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 48/120
	I0831 22:49:07.947207   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 49/120
	I0831 22:49:08.948936   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 50/120
	I0831 22:49:09.950928   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 51/120
	I0831 22:49:10.952299   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 52/120
	I0831 22:49:11.953506   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 53/120
	I0831 22:49:12.954874   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 54/120
	I0831 22:49:13.956627   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 55/120
	I0831 22:49:14.958071   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 56/120
	I0831 22:49:15.959298   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 57/120
	I0831 22:49:16.960878   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 58/120
	I0831 22:49:17.962351   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 59/120
	I0831 22:49:18.963915   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 60/120
	I0831 22:49:19.965686   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 61/120
	I0831 22:49:20.966928   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 62/120
	I0831 22:49:21.968953   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 63/120
	I0831 22:49:22.970351   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 64/120
	I0831 22:49:23.972105   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 65/120
	I0831 22:49:24.973284   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 66/120
	I0831 22:49:25.974462   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 67/120
	I0831 22:49:26.975868   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 68/120
	I0831 22:49:27.977033   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 69/120
	I0831 22:49:28.978655   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 70/120
	I0831 22:49:29.980215   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 71/120
	I0831 22:49:30.981692   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 72/120
	I0831 22:49:31.982834   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 73/120
	I0831 22:49:32.984188   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 74/120
	I0831 22:49:33.985942   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 75/120
	I0831 22:49:34.987145   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 76/120
	I0831 22:49:35.988656   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 77/120
	I0831 22:49:36.990000   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 78/120
	I0831 22:49:37.991474   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 79/120
	I0831 22:49:38.993357   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 80/120
	I0831 22:49:39.994650   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 81/120
	I0831 22:49:40.996016   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 82/120
	I0831 22:49:41.997188   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 83/120
	I0831 22:49:42.998716   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 84/120
	I0831 22:49:44.000598   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 85/120
	I0831 22:49:45.002085   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 86/120
	I0831 22:49:46.003390   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 87/120
	I0831 22:49:47.004678   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 88/120
	I0831 22:49:48.006084   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 89/120
	I0831 22:49:49.008567   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 90/120
	I0831 22:49:50.009748   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 91/120
	I0831 22:49:51.011072   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 92/120
	I0831 22:49:52.012331   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 93/120
	I0831 22:49:53.013553   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 94/120
	I0831 22:49:54.015352   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 95/120
	I0831 22:49:55.016570   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 96/120
	I0831 22:49:56.017706   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 97/120
	I0831 22:49:57.018900   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 98/120
	I0831 22:49:58.020108   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 99/120
	I0831 22:49:59.021689   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 100/120
	I0831 22:50:00.023571   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 101/120
	I0831 22:50:01.024768   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 102/120
	I0831 22:50:02.025972   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 103/120
	I0831 22:50:03.027183   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 104/120
	I0831 22:50:04.029164   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 105/120
	I0831 22:50:05.030437   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 106/120
	I0831 22:50:06.031798   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 107/120
	I0831 22:50:07.033255   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 108/120
	I0831 22:50:08.034791   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 109/120
	I0831 22:50:09.036603   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 110/120
	I0831 22:50:10.038044   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 111/120
	I0831 22:50:11.039466   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 112/120
	I0831 22:50:12.040832   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 113/120
	I0831 22:50:13.042080   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 114/120
	I0831 22:50:14.043738   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 115/120
	I0831 22:50:15.045174   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 116/120
	I0831 22:50:16.046491   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 117/120
	I0831 22:50:17.048055   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 118/120
	I0831 22:50:18.049866   41562 main.go:141] libmachine: (ha-957517-m02) Waiting for machine to stop 119/120
	I0831 22:50:19.050800   41562 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0831 22:50:19.050857   41562 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0831 22:50:19.053034   41562 out.go:201] 
	W0831 22:50:19.054410   41562 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0831 22:50:19.054437   41562 out.go:270] * 
	* 
	W0831 22:50:19.057608   41562 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 22:50:19.059892   41562 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-957517 stop -v=7 --alsologtostderr": exit status 82
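The repeated "Waiting for machine to stop n/120" lines above reflect a fixed one-second polling loop: the driver issues .Stop and then checks .GetState once per second, and after 120 attempts the stop is abandoned with GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of that stop-and-poll pattern, with hypothetical names rather than minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm abstracts the libmachine calls visible in the log (.Stop, .GetState).
type vm interface {
	Stop() error
	State() (string, error)
}

// stopAndWait issues a stop and then polls once per second, giving up after
// the given number of attempts, mirroring the 0/120 .. 119/120 lines above.
func stopAndWait(name string, m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if st, err := m.State(); err == nil && st == "Stopped" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM reports "Stopped" after a few state checks so the sketch runs quickly.
type fakeVM struct{ checksLeft int }

func (f *fakeVM) Stop() error { return nil }
func (f *fakeVM) State() (string, error) {
	if f.checksLeft <= 0 {
		return "Stopped", nil
	}
	f.checksLeft--
	return "Running", nil
}

func main() {
	if err := stopAndWait("ha-957517-m02", &fakeVM{checksLeft: 3}, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}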
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr: exit status 7 (33.729198299s)

                                                
                                                
-- stdout --
	ha-957517
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-957517-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-957517-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:50:19.106013   42032 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:50:19.106122   42032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:50:19.106129   42032 out.go:358] Setting ErrFile to fd 2...
	I0831 22:50:19.106134   42032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:50:19.106295   42032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:50:19.106456   42032 out.go:352] Setting JSON to false
	I0831 22:50:19.106478   42032 mustload.go:65] Loading cluster: ha-957517
	I0831 22:50:19.106628   42032 notify.go:220] Checking for updates...
	I0831 22:50:19.106870   42032 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:50:19.106887   42032 status.go:255] checking status of ha-957517 ...
	I0831 22:50:19.107378   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:19.107449   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:19.122245   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I0831 22:50:19.122533   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:19.123151   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:19.123170   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:19.123557   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:19.123784   42032 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:50:19.125334   42032 status.go:330] ha-957517 host status = "Running" (err=<nil>)
	I0831 22:50:19.125352   42032 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:50:19.125654   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:19.125686   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:19.139806   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0831 22:50:19.140235   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:19.140634   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:19.140655   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:19.140944   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:19.141139   42032 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:50:19.143829   42032 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:50:19.144226   42032 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:50:19.144244   42032 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:50:19.144410   42032 host.go:66] Checking if "ha-957517" exists ...
	I0831 22:50:19.144677   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:19.144714   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:19.158795   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38265
	I0831 22:50:19.159217   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:19.159687   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:19.159705   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:19.159964   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:19.160142   42032 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:50:19.160329   42032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:50:19.160356   42032 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:50:19.162815   42032 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:50:19.163207   42032 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:50:19.163240   42032 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:50:19.163373   42032 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:50:19.163529   42032 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:50:19.163724   42032 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:50:19.163859   42032 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:50:19.249259   42032 ssh_runner.go:195] Run: systemctl --version
	I0831 22:50:19.257404   42032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:50:19.277068   42032 kubeconfig.go:125] found "ha-957517" server: "https://192.168.39.254:8443"
	I0831 22:50:19.277102   42032 api_server.go:166] Checking apiserver status ...
	I0831 22:50:19.277142   42032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:50:19.296507   42032 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5747/cgroup
	W0831 22:50:19.308500   42032 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5747/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 22:50:19.308547   42032 ssh_runner.go:195] Run: ls
	I0831 22:50:19.314331   42032 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:50:24.314973   42032 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 22:50:24.315079   42032 retry.go:31] will retry after 203.257167ms: state is "Stopped"
	I0831 22:50:24.518428   42032 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:50:29.518740   42032 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 22:50:29.518791   42032 retry.go:31] will retry after 376.977767ms: state is "Stopped"
	I0831 22:50:29.896331   42032 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:50:30.627583   42032 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0831 22:50:30.627628   42032 retry.go:31] will retry after 466.236254ms: state is "Stopped"
	I0831 22:50:31.094238   42032 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0831 22:50:34.151550   42032 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0831 22:50:34.151602   42032 status.go:422] ha-957517 apiserver status = Running (err=<nil>)
	I0831 22:50:34.151611   42032 status.go:257] ha-957517 status: &{Name:ha-957517 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:50:34.151630   42032 status.go:255] checking status of ha-957517-m02 ...
	I0831 22:50:34.151932   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:34.151971   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:34.166442   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0831 22:50:34.166807   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:34.167270   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:34.167287   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:34.167641   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:34.167805   42032 main.go:141] libmachine: (ha-957517-m02) Calling .GetState
	I0831 22:50:34.169442   42032 status.go:330] ha-957517-m02 host status = "Running" (err=<nil>)
	I0831 22:50:34.169458   42032 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:50:34.169780   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:34.169816   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:34.184917   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0831 22:50:34.185300   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:34.185800   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:34.185821   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:34.186093   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:34.186309   42032 main.go:141] libmachine: (ha-957517-m02) Calling .GetIP
	I0831 22:50:34.188795   42032 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:50:34.189188   42032 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:39:29 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:50:34.189216   42032 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:50:34.189378   42032 host.go:66] Checking if "ha-957517-m02" exists ...
	I0831 22:50:34.189668   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:34.189700   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:34.203551   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
	I0831 22:50:34.203917   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:34.204348   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:34.204365   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:34.204647   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:34.204825   42032 main.go:141] libmachine: (ha-957517-m02) Calling .DriverName
	I0831 22:50:34.204998   42032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:50:34.205018   42032 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHHostname
	I0831 22:50:34.207339   42032 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:50:34.207790   42032 main.go:141] libmachine: (ha-957517-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:98", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:39:29 +0000 UTC Type:0 Mac:52:54:00:d0:a3:98 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-957517-m02 Clientid:01:52:54:00:d0:a3:98}
	I0831 22:50:34.207817   42032 main.go:141] libmachine: (ha-957517-m02) DBG | domain ha-957517-m02 has defined IP address 192.168.39.61 and MAC address 52:54:00:d0:a3:98 in network mk-ha-957517
	I0831 22:50:34.207930   42032 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHPort
	I0831 22:50:34.208077   42032 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHKeyPath
	I0831 22:50:34.208231   42032 main.go:141] libmachine: (ha-957517-m02) Calling .GetSSHUsername
	I0831 22:50:34.208341   42032 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517-m02/id_rsa Username:docker}
	W0831 22:50:52.771567   42032 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0831 22:50:52.771671   42032 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0831 22:50:52.771689   42032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:50:52.771697   42032 status.go:257] ha-957517-m02 status: &{Name:ha-957517-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 22:50:52.771714   42032 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0831 22:50:52.771721   42032 status.go:255] checking status of ha-957517-m04 ...
	I0831 22:50:52.772013   42032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:50:52.772061   42032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:50:52.787438   42032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0831 22:50:52.787980   42032 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:50:52.788525   42032 main.go:141] libmachine: Using API Version  1
	I0831 22:50:52.788547   42032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:50:52.788861   42032 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:50:52.789026   42032 main.go:141] libmachine: (ha-957517-m04) Calling .GetState
	I0831 22:50:52.790452   42032 status.go:330] ha-957517-m04 host status = "Stopped" (err=<nil>)
	I0831 22:50:52.790465   42032 status.go:343] host is not running, skipping remaining checks
	I0831 22:50:52.790473   42032 status.go:257] ha-957517-m04 status: &{Name:ha-957517-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
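In the status output above, the primary control plane ends up reported as "apiserver: Stopped" after the probe of https://192.168.39.254:8443/healthz repeatedly times out or loses its route; each failed attempt is retried after a short delay before the component is marked down. A minimal, hypothetical sketch of such a healthz probe (the endpoint and per-request timeout mirror the log; the code structure is an assumption, not minikube's actual status check):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver /healthz endpoint, retrying after a short
// delay when a request times out or cannot connect, and reports whether the
// endpoint ever answered 200 OK.
func probeHealthz(url string, attempts int, perTry time.Duration) bool {
	client := &http.Client{
		Timeout: perTry,
		// The cluster CA is self-signed; a real probe would trust the minikube
		// CA certificate instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return true
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		fmt.Printf("attempt %d/%d failed: %v\n", i+1, attempts, err)
		time.Sleep(300 * time.Millisecond)
	}
	return false
}

func main() {
	if probeHealthz("https://192.168.39.254:8443/healthz", 4, 5*time.Second) {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Println("apiserver: Stopped")
	}
}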
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr": ha-957517
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-957517-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-957517-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr": ha-957517
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-957517-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-957517-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr": ha-957517
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-957517-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-957517-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-957517 -n ha-957517
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-957517 -n ha-957517: exit status 2 (15.600079179s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p ha-957517 logs -n 25: (1.408668005s)
helpers_test.go:253: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m04 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp testdata/cp-test.txt                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517:/home/docker/cp-test_ha-957517-m04_ha-957517.txt                       |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517 sudo cat                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517.txt                                 |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m02 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m03:/home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n                                                                 | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | ha-957517-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-957517 ssh -n ha-957517-m03 sudo cat                                          | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	|         | /home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-957517 node stop m02 -v=7                                                     | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-957517 node start m02 -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-957517 -v=7                                                           | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-957517 -v=7                                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-957517 --wait=true -v=7                                                    | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-957517                                                                | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC |                     |
	| node    | ha-957517 node delete m03 -v=7                                                   | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-957517 stop -v=7                                                              | ha-957517 | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:37:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:37:43.980883   38680 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:37:43.981003   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981012   38680 out.go:358] Setting ErrFile to fd 2...
	I0831 22:37:43.981017   38680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:43.981185   38680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:37:43.981743   38680 out.go:352] Setting JSON to false
	I0831 22:37:43.982668   38680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4811,"bootTime":1725139053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:37:43.982721   38680 start.go:139] virtualization: kvm guest
	I0831 22:37:43.985184   38680 out.go:177] * [ha-957517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:37:43.986509   38680 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:37:43.986515   38680 notify.go:220] Checking for updates...
	I0831 22:37:43.989086   38680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:37:43.990438   38680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:37:43.991747   38680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:37:43.992969   38680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:37:43.994015   38680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:37:43.995541   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:43.995622   38680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:37:43.995993   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:43.996068   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.011776   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0831 22:37:44.012162   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.012650   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.012667   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.012988   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.013198   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.046996   38680 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 22:37:44.048362   38680 start.go:297] selected driver: kvm2
	I0831 22:37:44.048377   38680 start.go:901] validating driver "kvm2" against &{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.048522   38680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:37:44.048853   38680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.048953   38680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:37:44.063722   38680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:37:44.064393   38680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:37:44.064470   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:37:44.064486   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:37:44.064562   38680 start.go:340] cluster config:
	{Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:37:44.064759   38680 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:37:44.066648   38680 out.go:177] * Starting "ha-957517" primary control-plane node in "ha-957517" cluster
	I0831 22:37:44.067887   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:37:44.067918   38680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:37:44.067925   38680 cache.go:56] Caching tarball of preloaded images
	I0831 22:37:44.067991   38680 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 22:37:44.068000   38680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:37:44.068132   38680 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/config.json ...
	I0831 22:37:44.068445   38680 start.go:360] acquireMachinesLock for ha-957517: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:37:44.068502   38680 start.go:364] duration metric: took 31.59µs to acquireMachinesLock for "ha-957517"
	I0831 22:37:44.068521   38680 start.go:96] Skipping create...Using existing machine configuration
	I0831 22:37:44.068531   38680 fix.go:54] fixHost starting: 
	I0831 22:37:44.068854   38680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:37:44.068905   38680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:37:44.082801   38680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0831 22:37:44.083254   38680 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:37:44.083798   38680 main.go:141] libmachine: Using API Version  1
	I0831 22:37:44.083819   38680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:37:44.084088   38680 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:37:44.084260   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.084412   38680 main.go:141] libmachine: (ha-957517) Calling .GetState
	I0831 22:37:44.086212   38680 fix.go:112] recreateIfNeeded on ha-957517: state=Running err=<nil>
	W0831 22:37:44.086242   38680 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 22:37:44.088190   38680 out.go:177] * Updating the running kvm2 "ha-957517" VM ...
	I0831 22:37:44.089385   38680 machine.go:93] provisionDockerMachine start ...
	I0831 22:37:44.089401   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:37:44.089624   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.092086   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092623   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.092649   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.092785   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.092955   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093100   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.093214   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.093355   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.093526   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.093536   38680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:37:44.200556   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.200584   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.200847   38680 buildroot.go:166] provisioning hostname "ha-957517"
	I0831 22:37:44.200870   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.201116   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.203857   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204273   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.204297   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.204424   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.204626   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204766   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.204881   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.205020   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.205217   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.205231   38680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957517 && echo "ha-957517" | sudo tee /etc/hostname
	I0831 22:37:44.330466   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957517
	
	I0831 22:37:44.330490   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.333462   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333829   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.333868   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.333997   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.334236   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334427   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.334627   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.334794   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.334953   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.334968   38680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957517/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:37:44.440566   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:37:44.440594   38680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 22:37:44.440626   38680 buildroot.go:174] setting up certificates
	I0831 22:37:44.440636   38680 provision.go:84] configureAuth start
	I0831 22:37:44.440648   38680 main.go:141] libmachine: (ha-957517) Calling .GetMachineName
	I0831 22:37:44.440934   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:37:44.443531   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.443928   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.443954   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.444251   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.446892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447301   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.447348   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.447478   38680 provision.go:143] copyHostCerts
	I0831 22:37:44.447502   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447538   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 22:37:44.447557   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 22:37:44.447632   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 22:37:44.447757   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447782   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 22:37:44.447790   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 22:37:44.447831   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 22:37:44.447904   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447927   38680 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 22:37:44.447935   38680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 22:37:44.447966   38680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 22:37:44.448033   38680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.ha-957517 san=[127.0.0.1 192.168.39.137 ha-957517 localhost minikube]
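(For reference only, not a step executed in this run: the server.pem generated above embeds the SAN list shown in the san=[...] argument. If you have the file from the path logged earlier, a generic way to confirm those SANs is

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem

which should list the same IPs and DNS names; the openssl invocation is illustrative and does not appear anywhere in this log.)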
	I0831 22:37:44.517123   38680 provision.go:177] copyRemoteCerts
	I0831 22:37:44.517176   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:37:44.517197   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.519747   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520161   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.520195   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.520321   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.520494   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.520656   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.520777   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:37:44.602311   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 22:37:44.602376   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:37:44.631362   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 22:37:44.631445   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0831 22:37:44.663123   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 22:37:44.663190   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:37:44.691526   38680 provision.go:87] duration metric: took 250.877979ms to configureAuth
	I0831 22:37:44.691553   38680 buildroot.go:189] setting minikube options for container-runtime
	I0831 22:37:44.691854   38680 config.go:182] Loaded profile config "ha-957517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:37:44.691944   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:37:44.694465   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.694868   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:37:44.694892   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:37:44.695159   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:37:44.695350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695512   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:37:44.695618   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:37:44.695764   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:37:44.695955   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:37:44.695971   38680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:39:15.634828   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:39:15.634858   38680 machine.go:96] duration metric: took 1m31.54546155s to provisionDockerMachine
	I0831 22:39:15.634870   38680 start.go:293] postStartSetup for "ha-957517" (driver="kvm2")
	I0831 22:39:15.634881   38680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:39:15.634896   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.635202   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:39:15.635227   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.638236   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638748   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.638776   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.638909   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.639093   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.639293   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.639429   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.722855   38680 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:39:15.727014   38680 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 22:39:15.727034   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 22:39:15.727097   38680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 22:39:15.727199   38680 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 22:39:15.727212   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 22:39:15.727302   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 22:39:15.736663   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:15.760234   38680 start.go:296] duration metric: took 125.353074ms for postStartSetup
	I0831 22:39:15.760279   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.760559   38680 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0831 22:39:15.760588   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.763201   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763613   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.763633   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.763770   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.763954   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.764091   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.764216   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	W0831 22:39:15.846286   38680 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0831 22:39:15.846323   38680 fix.go:56] duration metric: took 1m31.777792266s for fixHost
	I0831 22:39:15.846350   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.848916   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849334   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.849365   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.849543   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.849722   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.849879   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.850019   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.850187   38680 main.go:141] libmachine: Using SSH client type: native
	I0831 22:39:15.850351   38680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0831 22:39:15.850361   38680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 22:39:15.951938   38680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725143955.913478582
	
	I0831 22:39:15.951960   38680 fix.go:216] guest clock: 1725143955.913478582
	I0831 22:39:15.951967   38680 fix.go:229] Guest: 2024-08-31 22:39:15.913478582 +0000 UTC Remote: 2024-08-31 22:39:15.846332814 +0000 UTC m=+91.900956878 (delta=67.145768ms)
	I0831 22:39:15.951984   38680 fix.go:200] guest clock delta is within tolerance: 67.145768ms
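(For reference, the delta logged above is simply the guest wall clock minus the host-side timestamp: 1725143955.913478582 s − 1725143955.846332814 s = 0.067145768 s ≈ 67.1 ms, which matches the 67.145768ms figure; the tolerance threshold it is compared against is not printed in this log.)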
	I0831 22:39:15.951989   38680 start.go:83] releasing machines lock for "ha-957517", held for 1m31.883475675s
	I0831 22:39:15.952012   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.952276   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:15.955057   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955473   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.955502   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.955634   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956283   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956455   38680 main.go:141] libmachine: (ha-957517) Calling .DriverName
	I0831 22:39:15.956567   38680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:39:15.956617   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.956632   38680 ssh_runner.go:195] Run: cat /version.json
	I0831 22:39:15.956655   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHHostname
	I0831 22:39:15.959097   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959114   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959529   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959554   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959578   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:15.959597   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:15.959696   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959871   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHPort
	I0831 22:39:15.959900   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960042   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHKeyPath
	I0831 22:39:15.960055   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960225   38680 main.go:141] libmachine: (ha-957517) Calling .GetSSHUsername
	I0831 22:39:15.960234   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:15.960339   38680 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/ha-957517/id_rsa Username:docker}
	I0831 22:39:16.064767   38680 ssh_runner.go:195] Run: systemctl --version
	I0831 22:39:16.070863   38680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:39:16.230376   38680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 22:39:16.236729   38680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:39:16.236783   38680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:39:16.245939   38680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 22:39:16.245960   38680 start.go:495] detecting cgroup driver to use...
	I0831 22:39:16.246006   38680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:39:16.261896   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:39:16.276357   38680 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:39:16.276410   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:39:16.289922   38680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:39:16.302913   38680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:39:16.451294   38680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:39:16.596005   38680 docker.go:233] disabling docker service ...
	I0831 22:39:16.596062   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:39:16.612423   38680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:39:16.625984   38680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:39:16.769630   38680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:39:16.915592   38680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:39:16.929353   38680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:39:16.949875   38680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:39:16.949927   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.960342   38680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:39:16.960402   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.970745   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.980972   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:16.991090   38680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:39:17.001258   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.011096   38680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.021887   38680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:39:17.031682   38680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:39:17.040513   38680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:39:17.049301   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.194385   38680 ssh_runner.go:195] Run: sudo systemctl restart crio
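(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following relevant settings before crio is restarted. This is a reconstruction from the commands in this log, not a capture of the file, and the TOML sections the keys live under are not shown here:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

i.e. the pause image is pinned, cgroupfs is selected as the cgroup driver with conmon in the pod cgroup, and unprivileged port binding is allowed from port 0.)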
	I0831 22:39:17.428315   38680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:39:17.428408   38680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:39:17.433515   38680 start.go:563] Will wait 60s for crictl version
	I0831 22:39:17.433556   38680 ssh_runner.go:195] Run: which crictl
	I0831 22:39:17.437499   38680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:39:17.479960   38680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 22:39:17.480026   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.515314   38680 ssh_runner.go:195] Run: crio --version
	I0831 22:39:17.547505   38680 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 22:39:17.548632   38680 main.go:141] libmachine: (ha-957517) Calling .GetIP
	I0831 22:39:17.550955   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551269   38680 main.go:141] libmachine: (ha-957517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:42:4f", ip: ""} in network mk-ha-957517: {Iface:virbr1 ExpiryTime:2024-08-31 23:27:55 +0000 UTC Type:0 Mac:52:54:00:e0:42:4f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-957517 Clientid:01:52:54:00:e0:42:4f}
	I0831 22:39:17.551296   38680 main.go:141] libmachine: (ha-957517) DBG | domain ha-957517 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:42:4f in network mk-ha-957517
	I0831 22:39:17.551521   38680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 22:39:17.556237   38680 kubeadm.go:883] updating cluster {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:39:17.556363   38680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:39:17.556415   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.600319   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.600339   38680 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:39:17.600382   38680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:39:17.634386   38680 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:39:17.634406   38680 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:39:17.634416   38680 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.0 crio true true} ...
	I0831 22:39:17.634526   38680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:39:17.634618   38680 ssh_runner.go:195] Run: crio config
	I0831 22:39:17.682178   38680 cni.go:84] Creating CNI manager for ""
	I0831 22:39:17.682203   38680 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 22:39:17.682220   38680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:39:17.682240   38680 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957517 NodeName:ha-957517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:39:17.682375   38680 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-957517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:39:17.682399   38680 kube-vip.go:115] generating kube-vip config ...
	I0831 22:39:17.682439   38680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0831 22:39:17.694650   38680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 22:39:17.694772   38680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0831 22:39:17.694843   38680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:39:17.705040   38680 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:39:17.705103   38680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 22:39:17.714471   38680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0831 22:39:17.733900   38680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:39:17.754099   38680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:39:17.773312   38680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 22:39:17.792847   38680 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0831 22:39:17.797963   38680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:39:17.955439   38680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:39:17.970324   38680 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517 for IP: 192.168.39.137
	I0831 22:39:17.970348   38680 certs.go:194] generating shared ca certs ...
	I0831 22:39:17.970363   38680 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:17.970501   38680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 22:39:17.970573   38680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 22:39:17.970590   38680 certs.go:256] generating profile certs ...
	I0831 22:39:17.970697   38680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/client.key
	I0831 22:39:17.970732   38680 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56
	I0831 22:39:17.970747   38680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137 192.168.39.61 192.168.39.26 192.168.39.254]
	I0831 22:39:18.083143   38680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 ...
	I0831 22:39:18.083186   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56: {Name:mk489dd79b841ee44fa8d66455c5fed8039b89dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083399   38680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 ...
	I0831 22:39:18.083417   38680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56: {Name:mkbcff44832282605e436763bcf5c32528ce79a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:39:18.083523   38680 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt
	I0831 22:39:18.083680   38680 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key.3e727c56 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key
	I0831 22:39:18.083806   38680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key
	I0831 22:39:18.083821   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 22:39:18.083834   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 22:39:18.083847   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:39:18.083860   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:39:18.083873   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 22:39:18.083885   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 22:39:18.083901   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 22:39:18.083913   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 22:39:18.083956   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 22:39:18.083983   38680 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 22:39:18.083992   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:39:18.084015   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:39:18.084037   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:39:18.084058   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 22:39:18.084099   38680 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 22:39:18.084124   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.084138   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.084150   38680 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.084726   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:39:18.111136   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:39:18.134120   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:39:18.157775   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:39:18.181362   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 22:39:18.205148   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:39:18.229117   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:39:18.252441   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/ha-957517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:39:18.276005   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 22:39:18.298954   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:39:18.321901   38680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 22:39:18.345593   38680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:39:18.363290   38680 ssh_runner.go:195] Run: openssl version
	I0831 22:39:18.369103   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 22:39:18.379738   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384052   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.384104   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 22:39:18.389812   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 22:39:18.399006   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 22:39:18.409817   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414246   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.414294   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 22:39:18.419998   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 22:39:18.429270   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:39:18.439988   38680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444351   38680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.444394   38680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:39:18.450124   38680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
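The three probe/symlink pairs above follow OpenSSL's CA-lookup convention: the trust directory is searched by subject-hash filenames, so each PEM gets a "<subject-hash>.0" link pointing at it. A minimal Go sketch of that step, with a hypothetical helper name (this is not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert creates the "<subject-hash>.0" symlink OpenSSL uses to find a CA
// certificate in /etc/ssl/certs, mirroring the "openssl x509 -hash -noout" and
// "ln -fs" pair shown in the log above.
func linkCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Create the link only if it does not already exist, as the logged command does.
	shell := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	return exec.Command("sudo", "/bin/bash", "-c", shell).Run()
}

func main() {
	if err := linkCACert("/etc/ssl/certs/20369.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}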
	I0831 22:39:18.459442   38680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:39:18.463809   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 22:39:18.469261   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 22:39:18.474818   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 22:39:18.480052   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 22:39:18.485805   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 22:39:18.490982   38680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
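The six "-checkend 86400" runs above are validity probes: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, and a non-zero exit is the signal that the cert needs regenerating. A one-function sketch of the same check (hypothetical helper, not the harness's code):

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h returns true when the certificate will not expire within the
// next 86400 seconds, exactly what the "-checkend 86400" probes above test.
func certValidFor24h(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	fmt.Println(certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}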
	I0831 22:39:18.496430   38680 kubeadm.go:392] StartCluster: {Name:ha-957517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-957517 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.61 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.109 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:39:18.496538   38680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:39:18.496594   38680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:39:18.540006   38680 cri.go:89] found id: "74033e6de6f78771aa278fb3bf2337b2694d3624100dd7e11f196f8efd688612"
	I0831 22:39:18.540034   38680 cri.go:89] found id: "829e2803166e8b4f563134db85ca290dee0f761c7f98598b5808a7653b837f29"
	I0831 22:39:18.540039   38680 cri.go:89] found id: "ce5a5113d787c6fa00a34027dbed5a4c4a2879f803312b2f06a9b73b7fabb497"
	I0831 22:39:18.540042   38680 cri.go:89] found id: "4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e"
	I0831 22:39:18.540044   38680 cri.go:89] found id: "0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6"
	I0831 22:39:18.540047   38680 cri.go:89] found id: "c7f58140d03288f0be44202d2983095d86acac5de80c884e4f461a5089c26c74"
	I0831 22:39:18.540050   38680 cri.go:89] found id: "35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23"
	I0831 22:39:18.540052   38680 cri.go:89] found id: "b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d"
	I0831 22:39:18.540055   38680 cri.go:89] found id: "883967c8cb80728f7470c0914f33ed4b393693567489f52525c22b793b4d34fe"
	I0831 22:39:18.540061   38680 cri.go:89] found id: "e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3"
	I0831 22:39:18.540073   38680 cri.go:89] found id: "f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18"
	I0831 22:39:18.540077   38680 cri.go:89] found id: "179da26791305cac07ebda53e50261954f96716bff5dd1951b202d9b74dd1b2d"
	I0831 22:39:18.540081   38680 cri.go:89] found id: "f4284e308e02aa0c60596b4f69ed7970f7e1b3a24ed152a48443071082cb3899"
	I0831 22:39:18.540085   38680 cri.go:89] found id: ""
	I0831 22:39:18.540125   38680 ssh_runner.go:195] Run: sudo runc list -f json
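The "found id" list above comes from the crictl invocation a few lines earlier; the same listing can be reproduced by hand with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A short sketch that shells out the same way (an assumption for illustration, not the test harness's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -a includes exited containers, --quiet prints only container IDs, and
	// --label filters on the kube-system namespace label, matching the logged command.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}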
	
	
	==> CRI-O <==
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.718044276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144668718019938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adb19972-ffe5-4ea7-a948-4f757cb2ca09 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.718975686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96501ad8-e852-4c82-9687-4be3d3504026 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.719034843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96501ad8-e852-4c82-9687-4be3d3504026 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.719450252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725144605489921783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a31e0757351d84df18a094a695953536610aa5f87324a0b4494008de72bdda,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725144590377066501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51
d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e5
4e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0d
a-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annot
ations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92ef
fa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96501ad8-e852-4c82-9687-4be3d3504026 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.761778989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cea7963-a1b8-4650-b753-4055e78317fd name=/runtime.v1.RuntimeService/Version
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.761857094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cea7963-a1b8-4650-b753-4055e78317fd name=/runtime.v1.RuntimeService/Version
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.762922286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faa755dc-3506-48e4-a070-6343c54fe492 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.763439277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144668763348531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faa755dc-3506-48e4-a070-6343c54fe492 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.763911330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=257ebc0d-8ae9-47b9-923c-9cb696cf4f88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.763962962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=257ebc0d-8ae9-47b9-923c-9cb696cf4f88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.764311077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725144605489921783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a31e0757351d84df18a094a695953536610aa5f87324a0b4494008de72bdda,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725144590377066501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51
d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e5
4e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0d
a-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annot
ations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92ef
fa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=257ebc0d-8ae9-47b9-923c-9cb696cf4f88 name=/runtime.v1.RuntimeService/ListContainers
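The Request/Response pairs in this CRI-O debug log are the CRI ListContainers RPC that crictl and the kubelet issue against the runtime's unix socket. A minimal client sketch using the published CRI API (the socket path is an assumption; this is not part of the test code):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust if the runtime endpoint differs.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns the full list, matching the
	// "No filters were applied" lines above.
	resp, err := client.ListContainers(context.Background(),
		&runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}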
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.805137388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ba1d555-5a85-495e-803b-951d0b501143 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.805218671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ba1d555-5a85-495e-803b-951d0b501143 name=/runtime.v1.RuntimeService/Version
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.806217331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=778ed56b-d4ed-44fa-8d0f-3d99a4492be1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.806732914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144668806708577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=778ed56b-d4ed-44fa-8d0f-3d99a4492be1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.807333119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1175cf7-48c7-421f-89a8-7958b7cee1d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.807447816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1175cf7-48c7-421f-89a8-7958b7cee1d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.807810725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725144605489921783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a31e0757351d84df18a094a695953536610aa5f87324a0b4494008de72bdda,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725144590377066501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51
d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e5
4e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0d
a-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annot
ations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92ef
fa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1175cf7-48c7-421f-89a8-7958b7cee1d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.850171526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d201075d-dcc5-4cd6-8cd7-b7bc610479ce name=/runtime.v1.RuntimeService/Version
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.850248278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d201075d-dcc5-4cd6-8cd7-b7bc610479ce name=/runtime.v1.RuntimeService/Version
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.851502408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bb52ed7-6461-46c9-970f-60915c9646d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.851950166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144668851923627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bb52ed7-6461-46c9-970f-60915c9646d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.852614447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ca1943-8308-4b50-98c2-f21163b95b1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.852666752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ca1943-8308-4b50-98c2-f21163b95b1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 22:51:08 ha-957517 crio[3554]: time="2024-08-31 22:51:08.853043877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e,PodSandboxId:3c9af8992e786ec49ad6c246f41a1c5c3ba066b343fc63a5d93b2cf45cc5682e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725144605489921783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45a6ec4251f5958391b270ae9be8513b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a31e0757351d84df18a094a695953536610aa5f87324a0b4494008de72bdda,PodSandboxId:0b38c1d912e18bfba10cfeab2f0eba776d91f410552c27cc13ec7ab1033e2e8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725144590377066501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b828130a-54f5-4449-9ff5-e47b4236c0dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725144007373617828,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e54e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2858979b6470489253d2c47268cbb3af1b867ae9eb4aacdea03a1cf65951445,PodSandboxId:3b0c514f045e8d53701e335bac5083f4f45474622b2fa5fb448199345d4ef565,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725143997643658312,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0da-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52e663cfc090e12a86f1e63580cdd811e0b6b5e9752047abd9507be38868b41,PodSandboxId:f30821f6fbe0a4eda38e5b61c3b2c7142c183bbf08f61acaa7c428000d7289e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725143979332093415,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f1c7545d833d2b7ea7603fdf6d1afb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064,PodSandboxId:52b89349255db2047cc63cc162a783e9572f41726af36bb85f9101190217f7d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725143964569869067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463,PodSandboxId:69681cb02b75358a3d32a57b923bd2df3bf769bb0c22b24f57363ea99ce09d61,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725143964429541467,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908,PodSandboxId:79c31968d44d201f591a39d7036f13985dc6366f38b51484f1954643848127b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964480894544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab,PodSandboxId:92ce9fbbebea89342247adb7deae64d4b4ac67c158d4b2bc3be02c78a7ad04d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725143964364812871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb,PodSandboxId:10c87af2fbc6e4fe63d16539eda0e751ae82fd50527c79860adb90f7a0ea2a0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725143964302961501,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321,PodSandboxId:bb4ef0b4cc8814af77b1e030bebc02824095fa732b4177ea24a9a0cc9f36674d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725143964227532626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51
d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef,PodSandboxId:a9b03c09aefd7791cd208a13de1277410ec89a7c80f778cff943fba934914e38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725143964150106858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f199e5
4e5de474bccab17312a8e8a1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc9ea3c2c4cc4ce65da68ecba379f82f5fdb3d067cee11293e38faeb433a0f00,PodSandboxId:9f283cd54a11fc8de239a3d9c11e3f2f2bcc3a85400f289206bef8640582ce20,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725143468325984858,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zdnwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4c669b0-a0d
a-4c7e-bc9a-976009a0ee37,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e,PodSandboxId:6e863e5cd9b9c24f6e84c69968a92b475b9fd97105d95f2c6b818bc7e06a0171,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322858113843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k7rsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b16969-bc2e-4ad9-b6c3-20b6d6775159,},Annot
ations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6,PodSandboxId:298283fc5c9c20cfc2473f573a0e739e80920a717bbdb1347517fff32674d60d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725143322792619676,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pc7gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a20dc0e7-f1d3-4fca-9dab-e93224a8b342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23,PodSandboxId:37828bdcd38b5c22f6596cf8a3d41898e27f5d1f6f3dcae5365a07164ae41687,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92ef
fa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725143310935687703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tkvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe590fb-e049-4622-8702-01e32fd77c4e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d,PodSandboxId:99877abcdf5a7fa4d6db1434eb34bf2ff41d7e32e84c88422b2801b4592aa5ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725143307100050549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrp64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4ac77de-bd1e-4fc5-902e-16f0b5de614c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3,PodSandboxId:144e67a21ecaa63a20ffdfaa640d2a5564220a1e8f94a2d65d939f0dafcfaebc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725143295443807388,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09972f10319bc0c3a74ffeb6bb3a4841,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18,PodSandboxId:960ae9b08a3eed15cc4e08b64e922e9784d66704ae21a2155099128efaadda1b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc
06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725143295412326794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-957517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676db26fc51d314abff76b324bee52f0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ca1943-8308-4b50-98c2-f21163b95b1e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6845d8a0b5ce1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Exited              kube-apiserver            4                   3c9af8992e786       kube-apiserver-ha-957517
	e6a31e0757351       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       5                   0b38c1d912e18       storage-provisioner
	c20207ed446d6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      11 minutes ago       Running             kube-controller-manager   2                   a9b03c09aefd7       kube-controller-manager-ha-957517
	b2858979b6470       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      11 minutes ago       Running             busybox                   1                   3b0c514f045e8       busybox-7dff88458-zdnwd
	f52e663cfc090       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      11 minutes ago       Running             kube-vip                  0                   f30821f6fbe0a       kube-vip-ha-957517
	7c0a265fbf500       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      11 minutes ago       Running             kube-proxy                1                   52b89349255db       kube-proxy-xrp64
	e7ce840d2d77d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago       Running             coredns                   1                   79c31968d44d2       coredns-6f6b679f8f-k7rsc
	bc02ceedf7190       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      11 minutes ago       Running             kindnet-cni               1                   69681cb02b753       kindnet-tkvsc
	94b314de8f0f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago       Running             coredns                   1                   92ce9fbbebea8       coredns-6f6b679f8f-pc7gn
	06800a2b4052c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      11 minutes ago       Running             kube-scheduler            1                   10c87af2fbc6e       kube-scheduler-ha-957517
	5a9df191ac669       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago       Running             etcd                      1                   bb4ef0b4cc881       etcd-ha-957517
	97642a4900a4f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      11 minutes ago       Exited              kube-controller-manager   1                   a9b03c09aefd7       kube-controller-manager-ha-957517
	dc9ea3c2c4cc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   20 minutes ago       Exited              busybox                   0                   9f283cd54a11f       busybox-7dff88458-zdnwd
	4a85b32a796fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago       Exited              coredns                   0                   6e863e5cd9b9c       coredns-6f6b679f8f-k7rsc
	0cfba67fe9abb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago       Exited              coredns                   0                   298283fc5c9c2       coredns-6f6b679f8f-pc7gn
	35cc0bc2b6243       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    22 minutes ago       Exited              kindnet-cni               0                   37828bdcd38b5       kindnet-tkvsc
	b1a123f41fac1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      22 minutes ago       Exited              kube-proxy                0                   99877abcdf5a7       kube-proxy-xrp64
	e1c6a4e36ddb2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      22 minutes ago       Exited              kube-scheduler            0                   144e67a21ecaa       kube-scheduler-ha-957517
	f3ae732e5626c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago       Exited              etcd                      0                   960ae9b08a3ee       etcd-ha-957517
	
	
	==> coredns [0cfba67fe9abb8724b91e17edf82b35e6283736231966549ac8ab34e0ea983b6] <==
	[INFO] 10.244.0.4:36544 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002043312s
	[INFO] 10.244.1.2:34999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003609s
	[INFO] 10.244.1.2:45741 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017294944s
	[INFO] 10.244.1.2:57093 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224681s
	[INFO] 10.244.2.2:49538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000358252s
	[INFO] 10.244.2.2:53732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00185161s
	[INFO] 10.244.2.2:41165 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231402s
	[INFO] 10.244.2.2:60230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118116s
	[INFO] 10.244.2.2:42062 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000271609s
	[INFO] 10.244.0.4:49034 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067938s
	[INFO] 10.244.0.4:36002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196492s
	[INFO] 10.244.1.2:54186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124969s
	[INFO] 10.244.1.2:47709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000506218s
	[INFO] 10.244.0.4:54205 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087475s
	[INFO] 10.244.0.4:48802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055159s
	[INFO] 10.244.1.2:46825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148852s
	[INFO] 10.244.2.2:60523 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183145s
	[INFO] 10.244.0.4:53842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116944s
	[INFO] 10.244.0.4:56291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000217808s
	[INFO] 10.244.0.4:53612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00028657s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	
	
	==> coredns [4a85b32a796fb79091080f0060837869f041154c8725818967f2cf8873a8fa2e] <==
	[INFO] 10.244.0.4:43334 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001723638s
	[INFO] 10.244.0.4:54010 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080627s
	[INFO] 10.244.0.4:47700 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424459s
	[INFO] 10.244.0.4:50346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070487s
	[INFO] 10.244.0.4:43522 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051146s
	[INFO] 10.244.1.2:60157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099584s
	[INFO] 10.244.1.2:48809 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104515s
	[INFO] 10.244.2.2:37042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132626s
	[INFO] 10.244.2.2:38343 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117546s
	[INFO] 10.244.2.2:53716 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092804s
	[INFO] 10.244.2.2:59881 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068808s
	[INFO] 10.244.0.4:40431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093051s
	[INFO] 10.244.0.4:39552 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087951s
	[INFO] 10.244.1.2:59301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113713s
	[INFO] 10.244.1.2:40299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210744s
	[INFO] 10.244.1.2:54276 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210063s
	[INFO] 10.244.2.2:34222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307653s
	[INFO] 10.244.2.2:42028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089936s
	[INFO] 10.244.2.2:47927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066426s
	[INFO] 10.244.0.4:39601 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085891s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [94b314de8f0f52dda8b7bdd1ae66592ab2cdeb6539fcf4dedcce6b24d0e8c0ab] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[496122418]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:33.951) (total time: 13002ms):
	Trace[496122418]: ---"Objects listed" error:Unauthorized 13002ms (22:50:46.953)
	Trace[496122418]: [13.002529105s] [13.002529105s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1084443713]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:49.532) (total time: 11460ms):
	Trace[1084443713]: ---"Objects listed" error:Unauthorized 11460ms (22:51:00.992)
	Trace[1084443713]: [11.460286316s] [11.460286316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1001830734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:48.776) (total time: 12228ms):
	Trace[1001830734]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 12228ms (22:51:01.005)
	Trace[1001830734]: [12.228198155s] [12.228198155s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3267": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3273": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3273": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: Trace[2127071828]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:48.992) (total time: 15646ms):
	Trace[2127071828]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3267": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 15645ms (22:51:04.637)
	Trace[2127071828]: [15.646007587s] [15.646007587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3267": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3271": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3271": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [e7ce840d2d77d4b340327fa2e3d7dd25a03827a2c1b11bc859a72e1092b67908] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[100575288]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:36.078) (total time: 10874ms):
	Trace[100575288]: ---"Objects listed" error:Unauthorized 10874ms (22:50:46.952)
	Trace[100575288]: [10.874890913s] [10.874890913s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1804299408]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:34.807) (total time: 12147ms):
	Trace[1804299408]: ---"Objects listed" error:Unauthorized 12147ms (22:50:46.955)
	Trace[1804299408]: [12.147914786s] [12.147914786s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[811762817]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:35.734) (total time: 11221ms):
	Trace[811762817]: ---"Objects listed" error:Unauthorized 11221ms (22:50:46.955)
	Trace[811762817]: [11.221430938s] [11.221430938s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3365": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3365": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3445": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3445": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3350": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[2045035414]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 22:50:51.867) (total time: 10138ms):
	Trace[2045035414]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3350": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 10138ms (22:51:02.005)
	Trace[2045035414]: [10.138348374s] [10.138348374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3350": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug31 22:28] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.064763] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057170] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193531] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.118523] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.278233] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.003192] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.620544] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058441] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.958169] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083987] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.815006] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.616164] kauditd_printk_skb: 38 callbacks suppressed
	[Aug31 22:29] kauditd_printk_skb: 24 callbacks suppressed
	[Aug31 22:39] systemd-fstab-generator[3479]: Ignoring "noauto" option for root device
	[  +0.149859] systemd-fstab-generator[3491]: Ignoring "noauto" option for root device
	[  +0.177091] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +0.139553] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +0.274919] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.761462] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +3.640979] kauditd_printk_skb: 122 callbacks suppressed
	[ +14.497757] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.070411] kauditd_printk_skb: 1 callbacks suppressed
	[Aug31 22:40] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.831333] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5a9df191ac6697cdf05d96d165f76ef623f86fca3fe572d070d052acdc1fb321] <==
	{"level":"info","ts":"2024-08-31T22:51:05.118913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:05.119023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:05.119058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:05.119091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a [logterm: 3, index: 4040] sent MsgPreVote request to f4a5e8bd42e87b19 at term 3"}
	{"level":"warn","ts":"2024-08-31T22:51:05.181736Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f4a5e8bd42e87b19","rtt":"926.487µs","error":"dial tcp 192.168.39.61:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-31T22:51:05.181835Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f4a5e8bd42e87b19","rtt":"8.33668ms","error":"dial tcp 192.168.39.61:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-31T22:51:05.444410Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198534,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-31T22:51:05.945147Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198534,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-31T22:51:06.446048Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198534,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-31T22:51:06.619263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:06.619326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:06.619341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:06.619356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a [logterm: 3, index: 4040] sent MsgPreVote request to f4a5e8bd42e87b19 at term 3"}
	{"level":"warn","ts":"2024-08-31T22:51:06.947247Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198534,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-31T22:51:07.448041Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198534,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-31T22:51:07.937097Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-08-31T22:51:07.937195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.999720819s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-31T22:51:07.937221Z","caller":"traceutil/trace.go:171","msg":"trace[1531298554] range","detail":"{range_begin:; range_end:; }","duration":"13.999765712s","start":"2024-08-31T22:50:53.937445Z","end":"2024-08-31T22:51:07.937211Z","steps":["trace[1531298554] 'agreement among raft nodes before linearized reading'  (duration: 13.999719678s)"],"step_count":1}
	{"level":"error","ts":"2024-08-31T22:51:07.937279Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-31T22:51:08.119019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:08.119129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:08.119162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-08-31T22:51:08.119209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a [logterm: 3, index: 4040] sent MsgPreVote request to f4a5e8bd42e87b19 at term 3"}
	{"level":"warn","ts":"2024-08-31T22:51:08.439606Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198535,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-31T22:51:08.940554Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9748764505275198535,"retry-timeout":"500ms"}
	
	
	==> etcd [f3ae732e5626c43a422935223a2a5d7802424061bc026fe60ce4eb5d31701c18] <==
	2024/08/31 22:37:44 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/31 22:37:44 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-31T22:37:44.867346Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T22:37:44.867485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-31T22:37:44.867596Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"5527995f6263874a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-31T22:37:44.867783Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.867840Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.867900Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868053Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868113Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868180Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868211Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a30a86b86970552"}
	{"level":"info","ts":"2024-08-31T22:37:44.868234Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868325Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868480Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868535Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868564Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5527995f6263874a","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.868593Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f4a5e8bd42e87b19"}
	{"level":"info","ts":"2024-08-31T22:37:44.871798Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"warn","ts":"2024-08-31T22:37:44.871868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.676535677s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-31T22:37:44.871944Z","caller":"traceutil/trace.go:171","msg":"trace[1670798017] range","detail":"{range_begin:; range_end:; }","duration":"8.676624743s","start":"2024-08-31T22:37:36.195311Z","end":"2024-08-31T22:37:44.871936Z","steps":["trace[1670798017] 'agreement among raft nodes before linearized reading'  (duration: 8.676534851s)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:37:44.871994Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-08-31T22:37:44.872025Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-957517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	{"level":"error","ts":"2024-08-31T22:37:44.872015Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 22:51:09 up 23 min,  0 users,  load average: 0.57, 0.62, 0.45
	Linux ha-957517 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35cc0bc2b6243b713ef33003aeded2321764f7ea3a9ed0dd8a7f26f845b28a23] <==
	I0831 22:37:11.965462       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:21.965624       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:21.965663       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:37:21.965778       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:21.965799       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:21.965887       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:21.965908       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:21.965961       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:21.965983       1 main.go:299] handling current node
	I0831 22:37:31.963598       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:31.963696       1 main.go:299] handling current node
	I0831 22:37:31.963726       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:31.963744       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:37:31.963981       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:31.964015       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:31.964101       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:31.964130       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:41.972192       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0831 22:37:41.972307       1 main.go:322] Node ha-957517-m03 has CIDR [10.244.2.0/24] 
	I0831 22:37:41.972549       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:37:41.972573       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:37:41.972674       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:37:41.972700       1 main.go:299] handling current node
	I0831 22:37:41.972724       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:37:41.972729       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [bc02ceedf71902407a852937684013ffd061d4a347fc13eeb31f2d9738e8b463] <==
	I0831 22:50:35.766339       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:50:35.766346       1 main.go:299] handling current node
	I0831 22:50:35.766358       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:50:35.766362       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	E0831 22:50:39.937127       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: dial tcp 10.96.0.1:443: i/o timeout
	I0831 22:50:45.762019       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:50:45.762161       1 main.go:299] handling current node
	I0831 22:50:45.762195       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:50:45.762214       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:50:45.762352       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:50:45.762449       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:50:55.769021       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:50:55.769050       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:50:55.769205       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:50:55.769234       1 main.go:299] handling current node
	I0831 22:50:55.769258       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:50:55.769263       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	W0831 22:51:03.750846       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3437": dial tcp 10.96.0.1:443: connect: connection refused
	E0831 22:51:03.750915       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3437": dial tcp 10.96.0.1:443: connect: connection refused
	I0831 22:51:05.761439       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0831 22:51:05.761503       1 main.go:322] Node ha-957517-m02 has CIDR [10.244.1.0/24] 
	I0831 22:51:05.761650       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0831 22:51:05.761756       1 main.go:322] Node ha-957517-m04 has CIDR [10.244.3.0/24] 
	I0831 22:51:05.761853       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0831 22:51:05.761860       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e] <==
	E0831 22:51:00.992158       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.950930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: etcdserver: request timed out
	E0831 22:51:00.992192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RoleBinding: etcdserver: request timed out
	E0831 22:51:00.992205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.LimitRange: etcdserver: request timed out
	E0831 22:51:00.992217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.LimitRange: failed to list *v1.LimitRange: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ClusterRoleBinding: etcdserver: request timed out
	E0831 22:51:00.992229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: etcdserver: request timed out
	E0831 22:51:00.992239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityClass: etcdserver: request timed out
	E0831 22:51:00.992250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964737       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: etcdserver: request timed out
	E0831 22:51:00.992280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: etcdserver: request timed out
	E0831 22:51:00.992292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ServiceAccount: etcdserver: request timed out
	E0831 22:51:00.992303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.964883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityLevelConfiguration: etcdserver: request timed out
	E0831 22:51:00.992332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.965232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: etcdserver: request timed out
	E0831 22:51:00.992345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: etcdserver: request timed out" logger="UnhandledError"
	W0831 22:51:00.965252       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ResourceQuota: etcdserver: request timed out
	E0831 22:51:00.992356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ResourceQuota: failed to list *v1.ResourceQuota: etcdserver: request timed out" logger="UnhandledError"
	
	
	==> kube-controller-manager [97642a4900a4fa0c1380c8d5c651cbb21c92e3acdbf1a27ad73ead678d0b9cef] <==
	I0831 22:39:25.523610       1 serving.go:386] Generated self-signed cert in-memory
	I0831 22:39:26.109857       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 22:39:26.110071       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:39:26.113096       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 22:39:26.113292       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 22:39:26.113913       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 22:39:26.114016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 22:39:46.790111       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.137:8443/healthz\": dial tcp 192.168.39.137:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c20207ed446d61d54da423cbcaaa6bf4fc20f68c36fb09c70a51045b7d3059d7] <==
	E0831 22:51:04.875810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: failed to list *v1.ControllerRevision: Get \"https://192.168.39.137:8443/apis/apps/v1/controllerrevisions?resourceVersion=3411\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:05.419364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: Get "https://192.168.39.137:8443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies?resourceVersion=3414": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:05.419521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: failed to list *v1.ValidatingAdmissionPolicy: Get \"https://192.168.39.137:8443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies?resourceVersion=3414\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:05.914338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.137:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=3427": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:05.914559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.137:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=3427\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:06.991233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.137:8443/api/v1/services?resourceVersion=3384": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:06.991431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.137:8443/api/v1/services?resourceVersion=3384\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:07.477856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RoleBinding: Get "https://192.168.39.137:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=3377": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:07.477947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: Get \"https://192.168.39.137:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=3377\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:07.827331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.137:8443/api/v1/pods?resourceVersion=3437": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:07.827526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.137:8443/api/v1/pods?resourceVersion=3437\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:08.248947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CronJob: Get "https://192.168.39.137:8443/apis/batch/v1/cronjobs?resourceVersion=3347": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:08.249037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CronJob: failed to list *v1.CronJob: Get \"https://192.168.39.137:8443/apis/batch/v1/cronjobs?resourceVersion=3347\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:09.024606       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.137:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:09.116902       1 gc_controller.go:151] "Failed to get node" err="node \"ha-957517-m03\" not found" logger="pod-garbage-collector-controller" node="ha-957517-m03"
	E0831 22:51:09.116943       1 gc_controller.go:151] "Failed to get node" err="node \"ha-957517-m03\" not found" logger="pod-garbage-collector-controller" node="ha-957517-m03"
	E0831 22:51:09.116950       1 gc_controller.go:151] "Failed to get node" err="node \"ha-957517-m03\" not found" logger="pod-garbage-collector-controller" node="ha-957517-m03"
	E0831 22:51:09.116955       1 gc_controller.go:151] "Failed to get node" err="node \"ha-957517-m03\" not found" logger="pod-garbage-collector-controller" node="ha-957517-m03"
	E0831 22:51:09.116960       1 gc_controller.go:151] "Failed to get node" err="node \"ha-957517-m03\" not found" logger="pod-garbage-collector-controller" node="ha-957517-m03"
	W0831 22:51:09.117511       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.137:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.137:8443: connect: connection refused
	W0831 22:51:09.180437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://192.168.39.137:8443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3377": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:09.180504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://192.168.39.137:8443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3377\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:09.437039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.137:8443/api/v1/persistentvolumes?resourceVersion=3445": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:09.437102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.137:8443/api/v1/persistentvolumes?resourceVersion=3445\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:09.525501       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.137:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.137:8443: connect: connection refused
	
	
	==> kube-proxy [7c0a265fbf500452e8b8475e6d1c20c3599236d92e4d7aabdeb673bdc6bf6064] <==
	E0831 22:39:26.911344       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:29.982883       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:33.055492       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:39.201473       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0831 22:39:51.487041       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0831 22:40:10.488958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0831 22:40:10.489188       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:40:10.537261       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:40:10.537354       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:40:10.537501       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:40:10.542452       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:40:10.542992       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:40:10.543052       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:40:10.545123       1 config.go:197] "Starting service config controller"
	I0831 22:40:10.545206       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:40:10.545249       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:40:10.545277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:40:10.546250       1 config.go:326] "Starting node config controller"
	I0831 22:40:10.546438       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:40:10.646348       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:40:10.646448       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:40:10.646525       1 shared_informer.go:320] Caches are synced for node config
	E0831 22:50:40.895201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dha-957517&resourceVersion=3358&timeout=6m58s&timeoutSeconds=418&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: dial tcp 192.168.39.254:8443: i/o timeout" logger="UnhandledError"
	E0831 22:50:40.895483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3434&timeout=5m11s&timeoutSeconds=311&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0831 22:50:43.967420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3315&timeout=6m34s&timeoutSeconds=394&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: dial tcp 192.168.39.254:8443: i/o timeout" logger="UnhandledError"
	
	
	==> kube-proxy [b1a123f41fac1e98e1903e7d04ab80b1962c73df885d70b767da45e73063a06d] <==
	E0831 22:36:21.374054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:21.374474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:21.374644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:29.181771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0831 22:36:29.181910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:29.181978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:29.182082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:29.182138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0831 22:36:29.183726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.022821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.022959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.023030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.023072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:41.023128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:41.023169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:56.381866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:56.382427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:36:56.382547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:36:56.382584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:05.598350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:05.598532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:33.246415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:33.246482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0831 22:37:39.390329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0831 22:37:39.390566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-957517&resourceVersion=1844\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [06800a2b4052cdfe1cf999a142ca15bdc3a04e0f6a055071342de3a3041b1cdb] <==
	E0831 22:50:38.518539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:39.180911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:50:39.180972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:44.013599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:50:44.013639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:44.692288       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:50:44.692493       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:50:46.528591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:50:46.528648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:47.862611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:50:47.862681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:48.328594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:50:48.328658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:51.840356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 22:50:51.840564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:56.737455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:50:56.737575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:50:58.085750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:50:58.085929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:51:04.062789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.137:8443/api/v1/services?resourceVersion=3368": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:04.062864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.137:8443/api/v1/services?resourceVersion=3368\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:09.442242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.137:8443/apis/apps/v1/statefulsets?resourceVersion=3377": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:09.442309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.137:8443/apis/apps/v1/statefulsets?resourceVersion=3377\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	W0831 22:51:09.522622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.137:8443/api/v1/persistentvolumeclaims?resourceVersion=3369": dial tcp 192.168.39.137:8443: connect: connection refused
	E0831 22:51:09.522687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.137:8443/api/v1/persistentvolumeclaims?resourceVersion=3369\": dial tcp 192.168.39.137:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [e1c6a4e36ddb26cdf63d89a5de2dbbb139b1abcdbdd997af831755273b9dbdd3] <==
	E0831 22:31:40.726228       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2t9r8\": pod kindnet-2t9r8 is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-2t9r8"
	I0831 22:31:40.726253       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2t9r8" node="ha-957517-m04"
	E0831 22:31:40.731781       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:31:40.731866       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3457f0a0-fd3b-4e40-819f-9d57c29036e6(kube-system/kindnet-mljxh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mljxh"
	E0831 22:31:40.731884       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mljxh\": pod kindnet-mljxh is already assigned to node \"ha-957517-m04\"" pod="kube-system/kindnet-mljxh"
	I0831 22:31:40.731900       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mljxh" node="ha-957517-m04"
	E0831 22:37:32.346967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0831 22:37:32.516236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0831 22:37:35.083345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0831 22:37:35.963186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0831 22:37:36.540206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0831 22:37:36.648180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0831 22:37:37.049684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:38.229658       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0831 22:37:38.502251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0831 22:37:38.836806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:39.474109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:39.769349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0831 22:37:41.935022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0831 22:37:42.328066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0831 22:37:43.701831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	I0831 22:37:44.800445       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0831 22:37:44.800584       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0831 22:37:44.800769       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0831 22:37:44.803420       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 31 22:50:53 ha-957517 kubelet[1303]: E0831 22:50:53.181720    1303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-957517?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 31 22:50:53 ha-957517 kubelet[1303]: I0831 22:50:53.181824    1303 status_manager.go:851] "Failed to get status for pod" podUID="45a6ec4251f5958391b270ae9be8513b" pod="kube-system/kube-apiserver-ha-957517" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:50:53 ha-957517 kubelet[1303]: E0831 22:50:53.181833    1303 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/etcd-ha-957517.17f0f11b9cb69a2d\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{etcd-ha-957517.17f0f11b9cb69a2d  kube-system   2011 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-ha-957517,UID:676db26fc51d314abff76b324bee52f0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:ha-957517,},FirstTimestamp:2024-08-31 22:35:58 +0000 UTC,LastTimestamp:2024-08-31 22:48:26.905806061 +0000 UTC m=+1205.669924111,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-957517,}"
	Aug 31 22:50:56 ha-957517 kubelet[1303]: I0831 22:50:56.253829    1303 status_manager.go:851] "Failed to get status for pod" podUID="b828130a-54f5-4449-9ff5-e47b4236c0dc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:50:58 ha-957517 kubelet[1303]: I0831 22:50:58.362794    1303 scope.go:117] "RemoveContainer" containerID="e6a31e0757351d84df18a094a695953536610aa5f87324a0b4494008de72bdda"
	Aug 31 22:50:58 ha-957517 kubelet[1303]: E0831 22:50:58.363012    1303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b828130a-54f5-4449-9ff5-e47b4236c0dc)\"" pod="kube-system/storage-provisioner" podUID="b828130a-54f5-4449-9ff5-e47b4236c0dc"
	Aug 31 22:50:59 ha-957517 kubelet[1303]: I0831 22:50:59.325936    1303 status_manager.go:851] "Failed to get status for pod" podUID="676db26fc51d314abff76b324bee52f0" pod="kube-system/etcd-ha-957517" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:50:59 ha-957517 kubelet[1303]: E0831 22:50:59.326843    1303 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-08-31T22:50:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-31T22:50:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-31T22:50:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-31T22:50:57Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ha-957517\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517/status?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:51:01 ha-957517 kubelet[1303]: I0831 22:51:01.228083    1303 scope.go:117] "RemoveContainer" containerID="c9a7461b1cbf9ec060a7465c45a2b567221211e23e03c97f4a9a7d27357126a7"
	Aug 31 22:51:01 ha-957517 kubelet[1303]: I0831 22:51:01.228484    1303 scope.go:117] "RemoveContainer" containerID="6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e"
	Aug 31 22:51:01 ha-957517 kubelet[1303]: E0831 22:51:01.228620    1303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-957517_kube-system(45a6ec4251f5958391b270ae9be8513b)\"" pod="kube-system/kube-apiserver-ha-957517" podUID="45a6ec4251f5958391b270ae9be8513b"
	Aug 31 22:51:01 ha-957517 kubelet[1303]: E0831 22:51:01.783094    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144661782643542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:51:01 ha-957517 kubelet[1303]: E0831 22:51:01.783120    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144661782643542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:51:02 ha-957517 kubelet[1303]: E0831 22:51:02.397886    1303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-957517?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 31 22:51:02 ha-957517 kubelet[1303]: E0831 22:51:02.397870    1303 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/etcd-ha-957517.17f0f11b9cb69a2d\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{etcd-ha-957517.17f0f11b9cb69a2d  kube-system   2011 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-ha-957517,UID:676db26fc51d314abff76b324bee52f0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:ha-957517,},FirstTimestamp:2024-08-31 22:35:58 +0000 UTC,LastTimestamp:2024-08-31 22:48:26.905806061 +0000 UTC m=+1205.669924111,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-957517,}"
	Aug 31 22:51:02 ha-957517 kubelet[1303]: E0831 22:51:02.397988    1303 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-957517\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:51:02 ha-957517 kubelet[1303]: I0831 22:51:02.398018    1303 status_manager.go:851] "Failed to get status for pod" podUID="45a6ec4251f5958391b270ae9be8513b" pod="kube-system/kube-apiserver-ha-957517" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:51:05 ha-957517 kubelet[1303]: E0831 22:51:05.469859    1303 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-957517\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:51:05 ha-957517 kubelet[1303]: I0831 22:51:05.469848    1303 status_manager.go:851] "Failed to get status for pod" podUID="b828130a-54f5-4449-9ff5-e47b4236c0dc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:51:07 ha-957517 kubelet[1303]: I0831 22:51:07.226644    1303 scope.go:117] "RemoveContainer" containerID="6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e"
	Aug 31 22:51:07 ha-957517 kubelet[1303]: E0831 22:51:07.226809    1303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-957517_kube-system(45a6ec4251f5958391b270ae9be8513b)\"" pod="kube-system/kube-apiserver-ha-957517" podUID="45a6ec4251f5958391b270ae9be8513b"
	Aug 31 22:51:08 ha-957517 kubelet[1303]: I0831 22:51:08.304113    1303 scope.go:117] "RemoveContainer" containerID="6845d8a0b5ce1e90a0e8b70ef27f9b9df5fb6b055055feb5aa7d30ce00f7323e"
	Aug 31 22:51:08 ha-957517 kubelet[1303]: E0831 22:51:08.304637    1303 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-957517_kube-system(45a6ec4251f5958391b270ae9be8513b)\"" pod="kube-system/kube-apiserver-ha-957517" podUID="45a6ec4251f5958391b270ae9be8513b"
	Aug 31 22:51:08 ha-957517 kubelet[1303]: E0831 22:51:08.542801    1303 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-957517\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-957517?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 31 22:51:08 ha-957517 kubelet[1303]: I0831 22:51:08.542854    1303 status_manager.go:851] "Failed to get status for pod" podUID="676db26fc51d314abff76b324bee52f0" pod="kube-system/etcd-ha-957517" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957517\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 22:51:08.457801   42284 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18943-13149/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-957517 -n ha-957517
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-957517 -n ha-957517: exit status 2 (219.695387ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "ha-957517" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (173.32s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (326.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328486
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-328486
E0831 23:04:45.611919   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:59.874625   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-328486: exit status 82 (2m1.84841731s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-328486-m03"  ...
	* Stopping node "multinode-328486-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-328486" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328486 --wait=true -v=8 --alsologtostderr
E0831 23:06:42.547612   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328486 --wait=true -v=8 --alsologtostderr: (3m22.381542298s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328486
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328486 -n multinode-328486
helpers_test.go:245: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-328486 logs -n 25: (1.520477697s)
helpers_test.go:253: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1488925976/001/cp-test_multinode-328486-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486:/home/docker/cp-test_multinode-328486-m02_multinode-328486.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486 sudo cat                                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m02_multinode-328486.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03:/home/docker/cp-test_multinode-328486-m02_multinode-328486-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486-m03 sudo cat                                   | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m02_multinode-328486-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp testdata/cp-test.txt                                                | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1488925976/001/cp-test_multinode-328486-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486:/home/docker/cp-test_multinode-328486-m03_multinode-328486.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486 sudo cat                                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m03_multinode-328486.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02:/home/docker/cp-test_multinode-328486-m03_multinode-328486-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486-m02 sudo cat                                   | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m03_multinode-328486-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-328486 node stop m03                                                          | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	| node    | multinode-328486 node start                                                             | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-328486                                                                | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC |                     |
	| stop    | -p multinode-328486                                                                     | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC |                     |
	| start   | -p multinode-328486                                                                     | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:06 UTC | 31 Aug 24 23:09 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-328486                                                                | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:09 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:06:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:06:01.776416   51160 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:06:01.776549   51160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:06:01.776559   51160 out.go:358] Setting ErrFile to fd 2...
	I0831 23:06:01.776565   51160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:06:01.776775   51160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:06:01.777317   51160 out.go:352] Setting JSON to false
	I0831 23:06:01.778349   51160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6509,"bootTime":1725139053,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:06:01.778412   51160 start.go:139] virtualization: kvm guest
	I0831 23:06:01.780741   51160 out.go:177] * [multinode-328486] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:06:01.782061   51160 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:06:01.782083   51160 notify.go:220] Checking for updates...
	I0831 23:06:01.784548   51160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:06:01.785813   51160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:06:01.787013   51160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:06:01.788388   51160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:06:01.789680   51160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:06:01.791646   51160 config.go:182] Loaded profile config "multinode-328486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:06:01.791758   51160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:06:01.792367   51160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:06:01.792465   51160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:06:01.807995   51160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0831 23:06:01.808508   51160 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:06:01.809004   51160 main.go:141] libmachine: Using API Version  1
	I0831 23:06:01.809024   51160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:06:01.809324   51160 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:06:01.809457   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:06:01.848611   51160 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 23:06:01.850113   51160 start.go:297] selected driver: kvm2
	I0831 23:06:01.850131   51160 start.go:901] validating driver "kvm2" against &{Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:06:01.850281   51160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:06:01.850613   51160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:06:01.850695   51160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 23:06:01.866302   51160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 23:06:01.867016   51160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:06:01.867057   51160 cni.go:84] Creating CNI manager for ""
	I0831 23:06:01.867070   51160 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0831 23:06:01.867139   51160 start.go:340] cluster config:
	{Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:06:01.867297   51160 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:06:01.869073   51160 out.go:177] * Starting "multinode-328486" primary control-plane node in "multinode-328486" cluster
	I0831 23:06:01.870698   51160 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:06:01.870742   51160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 23:06:01.870749   51160 cache.go:56] Caching tarball of preloaded images
	I0831 23:06:01.870838   51160 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 23:06:01.870848   51160 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:06:01.870973   51160 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/config.json ...
	I0831 23:06:01.871234   51160 start.go:360] acquireMachinesLock for multinode-328486: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 23:06:01.871289   51160 start.go:364] duration metric: took 33.863µs to acquireMachinesLock for "multinode-328486"
	I0831 23:06:01.871308   51160 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:06:01.871315   51160 fix.go:54] fixHost starting: 
	I0831 23:06:01.871660   51160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:06:01.871694   51160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:06:01.886247   51160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0831 23:06:01.886690   51160 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:06:01.887258   51160 main.go:141] libmachine: Using API Version  1
	I0831 23:06:01.887279   51160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:06:01.887615   51160 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:06:01.887824   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:06:01.888045   51160 main.go:141] libmachine: (multinode-328486) Calling .GetState
	I0831 23:06:01.889924   51160 fix.go:112] recreateIfNeeded on multinode-328486: state=Running err=<nil>
	W0831 23:06:01.889944   51160 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:06:01.891965   51160 out.go:177] * Updating the running kvm2 "multinode-328486" VM ...
	I0831 23:06:01.893267   51160 machine.go:93] provisionDockerMachine start ...
	I0831 23:06:01.893288   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:06:01.893564   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:01.896284   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:01.896803   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:01.896863   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:01.896974   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:01.897162   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:01.897328   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:01.897431   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:01.897562   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:01.897796   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:01.897813   51160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:06:02.000644   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328486
	
	I0831 23:06:02.000670   51160 main.go:141] libmachine: (multinode-328486) Calling .GetMachineName
	I0831 23:06:02.000906   51160 buildroot.go:166] provisioning hostname "multinode-328486"
	I0831 23:06:02.000930   51160 main.go:141] libmachine: (multinode-328486) Calling .GetMachineName
	I0831 23:06:02.001139   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.003882   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.004244   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.004274   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.004410   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.004580   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.004740   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.004884   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.005047   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:02.005218   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:02.005230   51160 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328486 && echo "multinode-328486" | sudo tee /etc/hostname
	I0831 23:06:02.121699   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328486
	
	I0831 23:06:02.121728   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.124650   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.124979   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.124999   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.125166   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.125353   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.125526   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.125657   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.125874   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:02.126042   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:02.126061   51160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328486/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:06:02.228447   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
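The provisioning step above runs two small shell snippets over SSH: one sets the transient and persistent hostname, the other makes sure /etc/hosts carries a 127.0.1.1 entry for the new name. A minimal, self-contained sketch of the same idea using golang.org/x/crypto/ssh follows; the key path, SSH user and the simplified /etc/hosts handling are illustrative assumptions, not minikube's actual provisioner code.

```go
// provision_hostname.go - hedged sketch, not minikube's provisioner.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed key path and address, matching the values visible in the log.
	key, err := os.ReadFile("/home/jenkins/.minikube/machines/multinode-328486/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.107:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	name := "multinode-328486"
	cmds := []string{
		// Set both the running hostname and /etc/hostname.
		fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
		// Simplified /etc/hosts handling (the real snippet also rewrites an existing 127.0.1.1 line).
		fmt.Sprintf(`grep -q '%[1]s' /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts`, name),
	}
	for _, cmd := range cmds {
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		if err != nil {
			log.Fatalf("%q failed: %v\n%s", cmd, err, out)
		}
	}
}
```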
	I0831 23:06:02.228477   51160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 23:06:02.228515   51160 buildroot.go:174] setting up certificates
	I0831 23:06:02.228526   51160 provision.go:84] configureAuth start
	I0831 23:06:02.228539   51160 main.go:141] libmachine: (multinode-328486) Calling .GetMachineName
	I0831 23:06:02.228825   51160 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:06:02.231186   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.231567   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.231589   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.231709   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.233944   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.234349   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.234379   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.234517   51160 provision.go:143] copyHostCerts
	I0831 23:06:02.234547   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 23:06:02.234582   51160 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 23:06:02.234600   51160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 23:06:02.234665   51160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 23:06:02.234753   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 23:06:02.234770   51160 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 23:06:02.234776   51160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 23:06:02.234800   51160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 23:06:02.234854   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 23:06:02.234870   51160 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 23:06:02.234876   51160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 23:06:02.234897   51160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 23:06:02.234953   51160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.multinode-328486 san=[127.0.0.1 192.168.39.107 localhost minikube multinode-328486]
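provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the node name. A hedged sketch of building a certificate with that SAN list via crypto/x509 follows; for brevity it self-signs, whereas minikube signs with its own CA (ca.pem / ca-key.pem), so treat it as illustration only.

```go
// gen_server_cert.go - sketch: a server cert carrying the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-328486"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list matching provision.go:117 above.
		DNSNames:    []string{"localhost", "minikube", "multinode-328486"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.107")},
	}
	// Self-signed for brevity; minikube uses its CA cert/key as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```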
	I0831 23:06:02.359379   51160 provision.go:177] copyRemoteCerts
	I0831 23:06:02.359431   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:06:02.359451   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.361856   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.362216   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.362238   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.362461   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.362656   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.362811   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.362946   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:06:02.442717   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:06:02.442777   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:06:02.469251   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:06:02.469321   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0831 23:06:02.502439   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:06:02.502506   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:06:02.529613   51160 provision.go:87] duration metric: took 301.075477ms to configureAuth
	I0831 23:06:02.529638   51160 buildroot.go:189] setting minikube options for container-runtime
	I0831 23:06:02.529838   51160 config.go:182] Loaded profile config "multinode-328486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:06:02.529899   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.532322   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.532618   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.532647   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.532783   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.532940   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.533078   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.533259   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.533403   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:02.533564   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:02.533583   51160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:07:33.286751   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:07:33.286778   51160 machine.go:96] duration metric: took 1m31.393500147s to provisionDockerMachine
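The container-runtime step writes a one-line environment file, /etc/sysconfig/crio.minikube, containing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ', then restarts crio; the timestamps show that single SSH command accounting for roughly 91 seconds of the 1m31s provisioning time above. A hedged sketch of the same write-and-restart sequence, written as a small Go program that would run as root on the guest itself (not minikube's code), follows.

```go
// crio_opts.go - sketch of the sysconfig write + restart seen in the log;
// the file path and option value come from the log, the rest is illustrative.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const content = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		log.Fatal(err)
	}
	// Restart CRI-O so the new options are picked up; this is the slow part in the log.
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	if err != nil {
		log.Fatalf("restart crio: %v\n%s", err, out)
	}
}
```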
	I0831 23:07:33.286789   51160 start.go:293] postStartSetup for "multinode-328486" (driver="kvm2")
	I0831 23:07:33.286800   51160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:07:33.286815   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.287096   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:07:33.287126   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.290388   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.290787   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.290814   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.290924   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.291092   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.291275   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.291418   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:07:33.376595   51160 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:07:33.381255   51160 command_runner.go:130] > NAME=Buildroot
	I0831 23:07:33.381283   51160 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0831 23:07:33.381288   51160 command_runner.go:130] > ID=buildroot
	I0831 23:07:33.381292   51160 command_runner.go:130] > VERSION_ID=2023.02.9
	I0831 23:07:33.381297   51160 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0831 23:07:33.381320   51160 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 23:07:33.381330   51160 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 23:07:33.381385   51160 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 23:07:33.381461   51160 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 23:07:33.381471   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 23:07:33.381554   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:07:33.391422   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:07:33.416416   51160 start.go:296] duration metric: took 129.614537ms for postStartSetup
	I0831 23:07:33.416455   51160 fix.go:56] duration metric: took 1m31.545140659s for fixHost
	I0831 23:07:33.416473   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.419529   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.419905   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.419939   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.420114   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.420313   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.420479   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.420661   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.420849   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:07:33.421007   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:07:33.421017   51160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 23:07:33.520202   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725145653.497677918
	
	I0831 23:07:33.520221   51160 fix.go:216] guest clock: 1725145653.497677918
	I0831 23:07:33.520228   51160 fix.go:229] Guest: 2024-08-31 23:07:33.497677918 +0000 UTC Remote: 2024-08-31 23:07:33.416459029 +0000 UTC m=+91.672998730 (delta=81.218889ms)
	I0831 23:07:33.520267   51160 fix.go:200] guest clock delta is within tolerance: 81.218889ms
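fix.go reads the guest clock with date +%s.%N, compares it with the host-side timestamp, and accepts the result when the delta is within tolerance (81ms here). A small sketch of that comparison follows; the one-second tolerance is an assumed value for illustration, not taken from the log.

```go
// clock_delta.go - sketch: parse `date +%s.%N` output and compare with local time.
package main

import (
	"fmt"
	"log"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1725145653.497677918" // guest output of `date +%s.%N`, from the log
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	guest := time.Unix(sec, nsec)

	local := time.Now()
	delta := time.Duration(math.Abs(float64(local.Sub(guest))))

	const tolerance = time.Second // assumed tolerance, for illustration only
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
```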
	I0831 23:07:33.520273   51160 start.go:83] releasing machines lock for "multinode-328486", held for 1m31.648972087s
	I0831 23:07:33.520301   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.520592   51160 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:07:33.523570   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.524087   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.524117   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.524255   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.524746   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.524944   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.525036   51160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:07:33.525084   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.525213   51160 ssh_runner.go:195] Run: cat /version.json
	I0831 23:07:33.525235   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.528060   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.528406   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.528433   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.528452   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.528595   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.528751   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.528914   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.528955   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.528981   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.529049   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:07:33.529312   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.529469   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.529600   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.529804   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:07:33.627681   51160 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0831 23:07:33.627748   51160 command_runner.go:130] > {"iso_version": "v1.33.1-1724862017-19530", "kicbase_version": "v0.0.44-1724775115-19521", "minikube_version": "v1.33.1", "commit": "0ce952d110f81b7b94ba20c385955675855b59fb"}
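The guest's /version.json is a small JSON document identifying the ISO, kicbase and minikube versions. Decoding it only needs a struct with matching field tags; a sketch using the exact payload shown above:

```go
// version_json.go - sketch: decode the /version.json document printed in the log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type guestVersion struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := `{"iso_version": "v1.33.1-1724862017-19530", "kicbase_version": "v0.0.44-1724775115-19521", "minikube_version": "v1.33.1", "commit": "0ce952d110f81b7b94ba20c385955675855b59fb"}`
	var v guestVersion
	if err := json.Unmarshal([]byte(raw), &v); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("iso=%s kicbase=%s minikube=%s commit=%s\n",
		v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}
```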
	I0831 23:07:33.627889   51160 ssh_runner.go:195] Run: systemctl --version
	I0831 23:07:33.634129   51160 command_runner.go:130] > systemd 252 (252)
	I0831 23:07:33.634171   51160 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0831 23:07:33.634226   51160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:07:33.794731   51160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:07:33.800849   51160 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0831 23:07:33.800887   51160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 23:07:33.800938   51160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:07:33.810243   51160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:07:33.810265   51160 start.go:495] detecting cgroup driver to use...
	I0831 23:07:33.810335   51160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:07:33.830472   51160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:07:33.846944   51160 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:07:33.846993   51160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:07:33.860617   51160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:07:33.874145   51160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:07:34.040693   51160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:07:34.184641   51160 docker.go:233] disabling docker service ...
	I0831 23:07:34.184716   51160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:07:34.201443   51160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:07:34.215242   51160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:07:34.358285   51160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:07:34.500491   51160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:07:34.514531   51160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:07:34.535737   51160 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0831 23:07:34.535781   51160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:07:34.535826   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.546695   51160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:07:34.546765   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.557349   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.567594   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.577396   51160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:07:34.587675   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.597956   51160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.609097   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.619495   51160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:07:34.629416   51160 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0831 23:07:34.629474   51160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:07:34.639176   51160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:07:34.778744   51160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:07:35.980148   51160 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.201363499s)
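The sed sequence starting at 23:07:34 rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is forced to "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls before systemd is reloaded and crio restarted. A hedged alternative expression of the same overrides, a Go sketch that writes them as a separate CRI-O drop-in file (the drop-in filename is an assumption; minikube edits 02-crio.conf directly), follows.

```go
// crio_overrides.go - sketch: the settings produced by the sed edits above,
// written as a separate drop-in instead (illustrative only).
package main

import (
	"log"
	"os"
)

func main() {
	const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`
	// Requires root on the guest; CRI-O merges files from crio.conf.d in lexical order.
	if err := os.WriteFile("/etc/crio/crio.conf.d/99-overrides.conf", []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
}
```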
	I0831 23:07:35.980178   51160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:07:35.980227   51160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:07:35.985438   51160 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0831 23:07:35.985458   51160 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0831 23:07:35.985465   51160 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0831 23:07:35.985471   51160 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0831 23:07:35.985476   51160 command_runner.go:130] > Access: 2024-08-31 23:07:35.855299816 +0000
	I0831 23:07:35.985484   51160 command_runner.go:130] > Modify: 2024-08-31 23:07:35.841299522 +0000
	I0831 23:07:35.985497   51160 command_runner.go:130] > Change: 2024-08-31 23:07:35.841299522 +0000
	I0831 23:07:35.985502   51160 command_runner.go:130] >  Birth: -
	I0831 23:07:35.985519   51160 start.go:563] Will wait 60s for crictl version
	I0831 23:07:35.985552   51160 ssh_runner.go:195] Run: which crictl
	I0831 23:07:35.989350   51160 command_runner.go:130] > /usr/bin/crictl
	I0831 23:07:35.989414   51160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:07:36.028153   51160 command_runner.go:130] > Version:  0.1.0
	I0831 23:07:36.028173   51160 command_runner.go:130] > RuntimeName:  cri-o
	I0831 23:07:36.028177   51160 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0831 23:07:36.028183   51160 command_runner.go:130] > RuntimeApiVersion:  v1
	I0831 23:07:36.030902   51160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 23:07:36.030979   51160 ssh_runner.go:195] Run: crio --version
	I0831 23:07:36.059163   51160 command_runner.go:130] > crio version 1.29.1
	I0831 23:07:36.059181   51160 command_runner.go:130] > Version:        1.29.1
	I0831 23:07:36.059188   51160 command_runner.go:130] > GitCommit:      unknown
	I0831 23:07:36.059213   51160 command_runner.go:130] > GitCommitDate:  unknown
	I0831 23:07:36.059220   51160 command_runner.go:130] > GitTreeState:   clean
	I0831 23:07:36.059228   51160 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0831 23:07:36.059234   51160 command_runner.go:130] > GoVersion:      go1.21.6
	I0831 23:07:36.059240   51160 command_runner.go:130] > Compiler:       gc
	I0831 23:07:36.059248   51160 command_runner.go:130] > Platform:       linux/amd64
	I0831 23:07:36.059257   51160 command_runner.go:130] > Linkmode:       dynamic
	I0831 23:07:36.059264   51160 command_runner.go:130] > BuildTags:      
	I0831 23:07:36.059272   51160 command_runner.go:130] >   containers_image_ostree_stub
	I0831 23:07:36.059279   51160 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0831 23:07:36.059286   51160 command_runner.go:130] >   btrfs_noversion
	I0831 23:07:36.059291   51160 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0831 23:07:36.059296   51160 command_runner.go:130] >   libdm_no_deferred_remove
	I0831 23:07:36.059300   51160 command_runner.go:130] >   seccomp
	I0831 23:07:36.059304   51160 command_runner.go:130] > LDFlags:          unknown
	I0831 23:07:36.059308   51160 command_runner.go:130] > SeccompEnabled:   true
	I0831 23:07:36.059312   51160 command_runner.go:130] > AppArmorEnabled:  false
	I0831 23:07:36.059410   51160 ssh_runner.go:195] Run: crio --version
	I0831 23:07:36.089109   51160 command_runner.go:130] > crio version 1.29.1
	I0831 23:07:36.089135   51160 command_runner.go:130] > Version:        1.29.1
	I0831 23:07:36.089144   51160 command_runner.go:130] > GitCommit:      unknown
	I0831 23:07:36.089150   51160 command_runner.go:130] > GitCommitDate:  unknown
	I0831 23:07:36.089156   51160 command_runner.go:130] > GitTreeState:   clean
	I0831 23:07:36.089164   51160 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0831 23:07:36.089170   51160 command_runner.go:130] > GoVersion:      go1.21.6
	I0831 23:07:36.089177   51160 command_runner.go:130] > Compiler:       gc
	I0831 23:07:36.089184   51160 command_runner.go:130] > Platform:       linux/amd64
	I0831 23:07:36.089191   51160 command_runner.go:130] > Linkmode:       dynamic
	I0831 23:07:36.089198   51160 command_runner.go:130] > BuildTags:      
	I0831 23:07:36.089203   51160 command_runner.go:130] >   containers_image_ostree_stub
	I0831 23:07:36.089208   51160 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0831 23:07:36.089213   51160 command_runner.go:130] >   btrfs_noversion
	I0831 23:07:36.089217   51160 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0831 23:07:36.089233   51160 command_runner.go:130] >   libdm_no_deferred_remove
	I0831 23:07:36.089237   51160 command_runner.go:130] >   seccomp
	I0831 23:07:36.089241   51160 command_runner.go:130] > LDFlags:          unknown
	I0831 23:07:36.089246   51160 command_runner.go:130] > SeccompEnabled:   true
	I0831 23:07:36.089253   51160 command_runner.go:130] > AppArmorEnabled:  false
	I0831 23:07:36.092605   51160 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 23:07:36.094483   51160 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:07:36.097076   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:36.097525   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:36.097557   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:36.097806   51160 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 23:07:36.102182   51160 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0831 23:07:36.102268   51160 kubeadm.go:883] updating cluster {Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:07:36.102451   51160 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:07:36.102504   51160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:07:36.144241   51160 command_runner.go:130] > {
	I0831 23:07:36.144267   51160 command_runner.go:130] >   "images": [
	I0831 23:07:36.144272   51160 command_runner.go:130] >     {
	I0831 23:07:36.144285   51160 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0831 23:07:36.144292   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144300   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0831 23:07:36.144306   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144312   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144326   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0831 23:07:36.144335   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0831 23:07:36.144341   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144348   51160 command_runner.go:130] >       "size": "87165492",
	I0831 23:07:36.144354   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144360   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144375   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144383   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144390   51160 command_runner.go:130] >     },
	I0831 23:07:36.144395   51160 command_runner.go:130] >     {
	I0831 23:07:36.144404   51160 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0831 23:07:36.144412   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144417   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0831 23:07:36.144421   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144425   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144432   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0831 23:07:36.144439   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0831 23:07:36.144444   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144452   51160 command_runner.go:130] >       "size": "87190579",
	I0831 23:07:36.144458   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144471   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144480   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144487   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144496   51160 command_runner.go:130] >     },
	I0831 23:07:36.144502   51160 command_runner.go:130] >     {
	I0831 23:07:36.144520   51160 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0831 23:07:36.144528   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144534   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0831 23:07:36.144538   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144542   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144551   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0831 23:07:36.144566   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0831 23:07:36.144573   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144580   51160 command_runner.go:130] >       "size": "1363676",
	I0831 23:07:36.144588   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144595   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144604   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144614   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144619   51160 command_runner.go:130] >     },
	I0831 23:07:36.144641   51160 command_runner.go:130] >     {
	I0831 23:07:36.144668   51160 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0831 23:07:36.144678   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144687   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0831 23:07:36.144692   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144698   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144708   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0831 23:07:36.144725   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0831 23:07:36.144733   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144739   51160 command_runner.go:130] >       "size": "31470524",
	I0831 23:07:36.144746   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144753   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144761   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144768   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144777   51160 command_runner.go:130] >     },
	I0831 23:07:36.144782   51160 command_runner.go:130] >     {
	I0831 23:07:36.144795   51160 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0831 23:07:36.144804   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144812   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0831 23:07:36.144820   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144826   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144839   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0831 23:07:36.144852   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0831 23:07:36.144858   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144862   51160 command_runner.go:130] >       "size": "61245718",
	I0831 23:07:36.144868   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144872   51160 command_runner.go:130] >       "username": "nonroot",
	I0831 23:07:36.144877   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144881   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144885   51160 command_runner.go:130] >     },
	I0831 23:07:36.144888   51160 command_runner.go:130] >     {
	I0831 23:07:36.144894   51160 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0831 23:07:36.144900   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144905   51160 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0831 23:07:36.144909   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144913   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144922   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0831 23:07:36.144931   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0831 23:07:36.144936   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144940   51160 command_runner.go:130] >       "size": "149009664",
	I0831 23:07:36.144946   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.144950   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.144953   51160 command_runner.go:130] >       },
	I0831 23:07:36.144959   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144963   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144968   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144971   51160 command_runner.go:130] >     },
	I0831 23:07:36.144975   51160 command_runner.go:130] >     {
	I0831 23:07:36.144982   51160 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0831 23:07:36.144986   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144993   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0831 23:07:36.144998   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145002   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145011   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0831 23:07:36.145020   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0831 23:07:36.145025   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145029   51160 command_runner.go:130] >       "size": "95233506",
	I0831 23:07:36.145035   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145044   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.145049   51160 command_runner.go:130] >       },
	I0831 23:07:36.145053   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145057   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145063   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145067   51160 command_runner.go:130] >     },
	I0831 23:07:36.145072   51160 command_runner.go:130] >     {
	I0831 23:07:36.145078   51160 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0831 23:07:36.145084   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145089   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0831 23:07:36.145095   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145099   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145126   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0831 23:07:36.145136   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0831 23:07:36.145141   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145145   51160 command_runner.go:130] >       "size": "89437512",
	I0831 23:07:36.145151   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145155   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.145161   51160 command_runner.go:130] >       },
	I0831 23:07:36.145165   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145168   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145172   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145175   51160 command_runner.go:130] >     },
	I0831 23:07:36.145179   51160 command_runner.go:130] >     {
	I0831 23:07:36.145186   51160 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0831 23:07:36.145190   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145194   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0831 23:07:36.145198   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145201   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145208   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0831 23:07:36.145214   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0831 23:07:36.145218   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145222   51160 command_runner.go:130] >       "size": "92728217",
	I0831 23:07:36.145225   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.145229   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145234   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145242   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145245   51160 command_runner.go:130] >     },
	I0831 23:07:36.145248   51160 command_runner.go:130] >     {
	I0831 23:07:36.145254   51160 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0831 23:07:36.145257   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145262   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0831 23:07:36.145268   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145272   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145281   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0831 23:07:36.145290   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0831 23:07:36.145295   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145299   51160 command_runner.go:130] >       "size": "68420936",
	I0831 23:07:36.145305   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145309   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.145314   51160 command_runner.go:130] >       },
	I0831 23:07:36.145319   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145324   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145328   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145334   51160 command_runner.go:130] >     },
	I0831 23:07:36.145337   51160 command_runner.go:130] >     {
	I0831 23:07:36.145342   51160 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0831 23:07:36.145348   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145355   51160 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0831 23:07:36.145360   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145369   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145378   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0831 23:07:36.145386   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0831 23:07:36.145392   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145396   51160 command_runner.go:130] >       "size": "742080",
	I0831 23:07:36.145402   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145406   51160 command_runner.go:130] >         "value": "65535"
	I0831 23:07:36.145411   51160 command_runner.go:130] >       },
	I0831 23:07:36.145415   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145421   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145426   51160 command_runner.go:130] >       "pinned": true
	I0831 23:07:36.145431   51160 command_runner.go:130] >     }
	I0831 23:07:36.145439   51160 command_runner.go:130] >   ]
	I0831 23:07:36.145445   51160 command_runner.go:130] > }
	I0831 23:07:36.145639   51160 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:07:36.145663   51160 crio.go:433] Images already preloaded, skipping extraction
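crio.go decides whether the preload tarball needs extracting by asking the runtime what it already has: sudo crictl images --output json returns the array shown above, and extraction is skipped when the required Kubernetes v1.31.0 images are all present. A sketch of that check follows; the struct mirrors the fields visible in the output, and the required-image list is an illustrative subset, not minikube's exact list.

```go
// preload_check.go - sketch: decode `crictl images --output json` and check for required tags.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of what a v1.31.0 control plane needs, taken from the listing above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/kube-controller-manager:v1.31.0",
		"registry.k8s.io/kube-scheduler:v1.31.0",
		"registry.k8s.io/kube-proxy:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.10",
	}
	for _, r := range required {
		if !have[r] {
			fmt.Printf("missing %s - preload extraction needed\n", r)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime.")
}
```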
	I0831 23:07:36.145717   51160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:07:36.178073   51160 command_runner.go:130] > {
	I0831 23:07:36.178094   51160 command_runner.go:130] >   "images": [
	I0831 23:07:36.178098   51160 command_runner.go:130] >     {
	I0831 23:07:36.178105   51160 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0831 23:07:36.178110   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178119   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0831 23:07:36.178124   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178131   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178145   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0831 23:07:36.178156   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0831 23:07:36.178161   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178166   51160 command_runner.go:130] >       "size": "87165492",
	I0831 23:07:36.178170   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178175   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178187   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178195   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178198   51160 command_runner.go:130] >     },
	I0831 23:07:36.178202   51160 command_runner.go:130] >     {
	I0831 23:07:36.178211   51160 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0831 23:07:36.178218   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178227   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0831 23:07:36.178236   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178243   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178256   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0831 23:07:36.178266   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0831 23:07:36.178272   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178276   51160 command_runner.go:130] >       "size": "87190579",
	I0831 23:07:36.178280   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178293   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178302   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178312   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178321   51160 command_runner.go:130] >     },
	I0831 23:07:36.178328   51160 command_runner.go:130] >     {
	I0831 23:07:36.178344   51160 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0831 23:07:36.178354   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178362   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0831 23:07:36.178365   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178372   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178379   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0831 23:07:36.178393   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0831 23:07:36.178403   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178419   51160 command_runner.go:130] >       "size": "1363676",
	I0831 23:07:36.178428   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178438   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178450   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178460   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178464   51160 command_runner.go:130] >     },
	I0831 23:07:36.178472   51160 command_runner.go:130] >     {
	I0831 23:07:36.178481   51160 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0831 23:07:36.178491   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178502   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0831 23:07:36.178511   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178521   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178536   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0831 23:07:36.178556   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0831 23:07:36.178565   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178574   51160 command_runner.go:130] >       "size": "31470524",
	I0831 23:07:36.178584   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178593   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178602   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178611   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178619   51160 command_runner.go:130] >     },
	I0831 23:07:36.178626   51160 command_runner.go:130] >     {
	I0831 23:07:36.178635   51160 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0831 23:07:36.178643   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178650   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0831 23:07:36.178658   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178665   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178680   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0831 23:07:36.178704   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0831 23:07:36.178712   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178722   51160 command_runner.go:130] >       "size": "61245718",
	I0831 23:07:36.178730   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178738   51160 command_runner.go:130] >       "username": "nonroot",
	I0831 23:07:36.178746   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178756   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178764   51160 command_runner.go:130] >     },
	I0831 23:07:36.178772   51160 command_runner.go:130] >     {
	I0831 23:07:36.178782   51160 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0831 23:07:36.178790   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178801   51160 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0831 23:07:36.178809   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178817   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178825   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0831 23:07:36.178839   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0831 23:07:36.178848   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178858   51160 command_runner.go:130] >       "size": "149009664",
	I0831 23:07:36.178866   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.178874   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.178885   51160 command_runner.go:130] >       },
	I0831 23:07:36.178895   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178903   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178909   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178915   51160 command_runner.go:130] >     },
	I0831 23:07:36.178923   51160 command_runner.go:130] >     {
	I0831 23:07:36.178937   51160 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0831 23:07:36.178946   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178955   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0831 23:07:36.178962   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178969   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178987   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0831 23:07:36.179003   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0831 23:07:36.179011   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179020   51160 command_runner.go:130] >       "size": "95233506",
	I0831 23:07:36.179029   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179046   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.179054   51160 command_runner.go:130] >       },
	I0831 23:07:36.179063   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179073   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179079   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179085   51160 command_runner.go:130] >     },
	I0831 23:07:36.179090   51160 command_runner.go:130] >     {
	I0831 23:07:36.179101   51160 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0831 23:07:36.179111   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179122   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0831 23:07:36.179130   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179137   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179166   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0831 23:07:36.179178   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0831 23:07:36.179186   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179196   51160 command_runner.go:130] >       "size": "89437512",
	I0831 23:07:36.179205   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179214   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.179219   51160 command_runner.go:130] >       },
	I0831 23:07:36.179228   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179237   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179246   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179255   51160 command_runner.go:130] >     },
	I0831 23:07:36.179261   51160 command_runner.go:130] >     {
	I0831 23:07:36.179269   51160 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0831 23:07:36.179279   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179290   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0831 23:07:36.179298   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179306   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179320   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0831 23:07:36.179346   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0831 23:07:36.179355   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179361   51160 command_runner.go:130] >       "size": "92728217",
	I0831 23:07:36.179370   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.179377   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179386   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179403   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179416   51160 command_runner.go:130] >     },
	I0831 23:07:36.179432   51160 command_runner.go:130] >     {
	I0831 23:07:36.179445   51160 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0831 23:07:36.179453   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179464   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0831 23:07:36.179473   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179480   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179491   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0831 23:07:36.179504   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0831 23:07:36.179513   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179520   51160 command_runner.go:130] >       "size": "68420936",
	I0831 23:07:36.179528   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179538   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.179545   51160 command_runner.go:130] >       },
	I0831 23:07:36.179554   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179563   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179571   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179577   51160 command_runner.go:130] >     },
	I0831 23:07:36.179595   51160 command_runner.go:130] >     {
	I0831 23:07:36.179609   51160 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0831 23:07:36.179624   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179639   51160 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0831 23:07:36.179648   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179657   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179667   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0831 23:07:36.179681   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0831 23:07:36.179689   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179699   51160 command_runner.go:130] >       "size": "742080",
	I0831 23:07:36.179707   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179713   51160 command_runner.go:130] >         "value": "65535"
	I0831 23:07:36.179722   51160 command_runner.go:130] >       },
	I0831 23:07:36.179730   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179739   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179747   51160 command_runner.go:130] >       "pinned": true
	I0831 23:07:36.179754   51160 command_runner.go:130] >     }
	I0831 23:07:36.179764   51160 command_runner.go:130] >   ]
	I0831 23:07:36.179771   51160 command_runner.go:130] > }
	I0831 23:07:36.179943   51160 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:07:36.179958   51160 cache_images.go:84] Images are preloaded, skipping loading
	I0831 23:07:36.179967   51160 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0831 23:07:36.180091   51160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-328486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:07:36.180172   51160 ssh_runner.go:195] Run: crio config
	I0831 23:07:36.212371   51160 command_runner.go:130] ! time="2024-08-31 23:07:36.189798002Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0831 23:07:36.218707   51160 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0831 23:07:36.226861   51160 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0831 23:07:36.226881   51160 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0831 23:07:36.226888   51160 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0831 23:07:36.226891   51160 command_runner.go:130] > #
	I0831 23:07:36.226900   51160 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0831 23:07:36.226907   51160 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0831 23:07:36.226913   51160 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0831 23:07:36.226920   51160 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0831 23:07:36.226924   51160 command_runner.go:130] > # reload'.
	I0831 23:07:36.226930   51160 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0831 23:07:36.226936   51160 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0831 23:07:36.226942   51160 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0831 23:07:36.226947   51160 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0831 23:07:36.226953   51160 command_runner.go:130] > [crio]
	I0831 23:07:36.226959   51160 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0831 23:07:36.226965   51160 command_runner.go:130] > # containers images, in this directory.
	I0831 23:07:36.226969   51160 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0831 23:07:36.226980   51160 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0831 23:07:36.226988   51160 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0831 23:07:36.226995   51160 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0831 23:07:36.226998   51160 command_runner.go:130] > # imagestore = ""
	I0831 23:07:36.227004   51160 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0831 23:07:36.227013   51160 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0831 23:07:36.227017   51160 command_runner.go:130] > storage_driver = "overlay"
	I0831 23:07:36.227023   51160 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0831 23:07:36.227035   51160 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0831 23:07:36.227044   51160 command_runner.go:130] > storage_option = [
	I0831 23:07:36.227051   51160 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0831 23:07:36.227054   51160 command_runner.go:130] > ]
	I0831 23:07:36.227061   51160 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0831 23:07:36.227069   51160 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0831 23:07:36.227079   51160 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0831 23:07:36.227091   51160 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0831 23:07:36.227098   51160 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0831 23:07:36.227103   51160 command_runner.go:130] > # always happen on a node reboot
	I0831 23:07:36.227108   51160 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0831 23:07:36.227121   51160 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0831 23:07:36.227128   51160 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0831 23:07:36.227136   51160 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0831 23:07:36.227141   51160 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0831 23:07:36.227150   51160 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0831 23:07:36.227160   51160 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0831 23:07:36.227166   51160 command_runner.go:130] > # internal_wipe = true
	I0831 23:07:36.227174   51160 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0831 23:07:36.227192   51160 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0831 23:07:36.227198   51160 command_runner.go:130] > # internal_repair = false
	I0831 23:07:36.227203   51160 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0831 23:07:36.227211   51160 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0831 23:07:36.227217   51160 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0831 23:07:36.227223   51160 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0831 23:07:36.227229   51160 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0831 23:07:36.227235   51160 command_runner.go:130] > [crio.api]
	I0831 23:07:36.227240   51160 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0831 23:07:36.227247   51160 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0831 23:07:36.227252   51160 command_runner.go:130] > # IP address on which the stream server will listen.
	I0831 23:07:36.227258   51160 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0831 23:07:36.227265   51160 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0831 23:07:36.227272   51160 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0831 23:07:36.227276   51160 command_runner.go:130] > # stream_port = "0"
	I0831 23:07:36.227283   51160 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0831 23:07:36.227287   51160 command_runner.go:130] > # stream_enable_tls = false
	I0831 23:07:36.227299   51160 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0831 23:07:36.227306   51160 command_runner.go:130] > # stream_idle_timeout = ""
	I0831 23:07:36.227314   51160 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0831 23:07:36.227333   51160 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0831 23:07:36.227341   51160 command_runner.go:130] > # minutes.
	I0831 23:07:36.227347   51160 command_runner.go:130] > # stream_tls_cert = ""
	I0831 23:07:36.227355   51160 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0831 23:07:36.227368   51160 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0831 23:07:36.227376   51160 command_runner.go:130] > # stream_tls_key = ""
	I0831 23:07:36.227384   51160 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0831 23:07:36.227391   51160 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0831 23:07:36.227411   51160 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0831 23:07:36.227417   51160 command_runner.go:130] > # stream_tls_ca = ""
	I0831 23:07:36.227424   51160 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0831 23:07:36.227429   51160 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0831 23:07:36.227435   51160 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0831 23:07:36.227442   51160 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0831 23:07:36.227450   51160 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0831 23:07:36.227458   51160 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0831 23:07:36.227461   51160 command_runner.go:130] > [crio.runtime]
	I0831 23:07:36.227470   51160 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0831 23:07:36.227477   51160 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0831 23:07:36.227481   51160 command_runner.go:130] > # "nofile=1024:2048"
	I0831 23:07:36.227489   51160 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0831 23:07:36.227494   51160 command_runner.go:130] > # default_ulimits = [
	I0831 23:07:36.227497   51160 command_runner.go:130] > # ]
	I0831 23:07:36.227503   51160 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0831 23:07:36.227509   51160 command_runner.go:130] > # no_pivot = false
	I0831 23:07:36.227514   51160 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0831 23:07:36.227522   51160 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0831 23:07:36.227529   51160 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0831 23:07:36.227534   51160 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0831 23:07:36.227542   51160 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0831 23:07:36.227548   51160 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0831 23:07:36.227555   51160 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0831 23:07:36.227559   51160 command_runner.go:130] > # Cgroup setting for conmon
	I0831 23:07:36.227572   51160 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0831 23:07:36.227579   51160 command_runner.go:130] > conmon_cgroup = "pod"
	I0831 23:07:36.227584   51160 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0831 23:07:36.227592   51160 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0831 23:07:36.227600   51160 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0831 23:07:36.227605   51160 command_runner.go:130] > conmon_env = [
	I0831 23:07:36.227611   51160 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0831 23:07:36.227616   51160 command_runner.go:130] > ]
	I0831 23:07:36.227623   51160 command_runner.go:130] > # Additional environment variables to set for all the
	I0831 23:07:36.227633   51160 command_runner.go:130] > # containers. These are overridden if set in the
	I0831 23:07:36.227644   51160 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0831 23:07:36.227652   51160 command_runner.go:130] > # default_env = [
	I0831 23:07:36.227658   51160 command_runner.go:130] > # ]
	I0831 23:07:36.227669   51160 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0831 23:07:36.227683   51160 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0831 23:07:36.227693   51160 command_runner.go:130] > # selinux = false
	I0831 23:07:36.227702   51160 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0831 23:07:36.227714   51160 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0831 23:07:36.227726   51160 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0831 23:07:36.227734   51160 command_runner.go:130] > # seccomp_profile = ""
	I0831 23:07:36.227740   51160 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0831 23:07:36.227747   51160 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0831 23:07:36.227753   51160 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0831 23:07:36.227759   51160 command_runner.go:130] > # which might increase security.
	I0831 23:07:36.227764   51160 command_runner.go:130] > # This option is currently deprecated,
	I0831 23:07:36.227769   51160 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0831 23:07:36.227776   51160 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0831 23:07:36.227782   51160 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0831 23:07:36.227789   51160 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0831 23:07:36.227796   51160 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0831 23:07:36.227803   51160 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0831 23:07:36.227809   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.227815   51160 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0831 23:07:36.227821   51160 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0831 23:07:36.227827   51160 command_runner.go:130] > # the cgroup blockio controller.
	I0831 23:07:36.227831   51160 command_runner.go:130] > # blockio_config_file = ""
	I0831 23:07:36.227847   51160 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0831 23:07:36.227853   51160 command_runner.go:130] > # blockio parameters.
	I0831 23:07:36.227856   51160 command_runner.go:130] > # blockio_reload = false
	I0831 23:07:36.227863   51160 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0831 23:07:36.227869   51160 command_runner.go:130] > # irqbalance daemon.
	I0831 23:07:36.227874   51160 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0831 23:07:36.227883   51160 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0831 23:07:36.227893   51160 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0831 23:07:36.227902   51160 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0831 23:07:36.227909   51160 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0831 23:07:36.227918   51160 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0831 23:07:36.227923   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.227929   51160 command_runner.go:130] > # rdt_config_file = ""
	I0831 23:07:36.227933   51160 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0831 23:07:36.227940   51160 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0831 23:07:36.227969   51160 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0831 23:07:36.227975   51160 command_runner.go:130] > # separate_pull_cgroup = ""
	I0831 23:07:36.227981   51160 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0831 23:07:36.227987   51160 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0831 23:07:36.227990   51160 command_runner.go:130] > # will be added.
	I0831 23:07:36.227997   51160 command_runner.go:130] > # default_capabilities = [
	I0831 23:07:36.228000   51160 command_runner.go:130] > # 	"CHOWN",
	I0831 23:07:36.228006   51160 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0831 23:07:36.228010   51160 command_runner.go:130] > # 	"FSETID",
	I0831 23:07:36.228016   51160 command_runner.go:130] > # 	"FOWNER",
	I0831 23:07:36.228020   51160 command_runner.go:130] > # 	"SETGID",
	I0831 23:07:36.228025   51160 command_runner.go:130] > # 	"SETUID",
	I0831 23:07:36.228029   51160 command_runner.go:130] > # 	"SETPCAP",
	I0831 23:07:36.228032   51160 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0831 23:07:36.228038   51160 command_runner.go:130] > # 	"KILL",
	I0831 23:07:36.228041   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228050   51160 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0831 23:07:36.228058   51160 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0831 23:07:36.228064   51160 command_runner.go:130] > # add_inheritable_capabilities = false
	I0831 23:07:36.228071   51160 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0831 23:07:36.228078   51160 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0831 23:07:36.228095   51160 command_runner.go:130] > default_sysctls = [
	I0831 23:07:36.228102   51160 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0831 23:07:36.228106   51160 command_runner.go:130] > ]
	I0831 23:07:36.228111   51160 command_runner.go:130] > # List of devices on the host that a
	I0831 23:07:36.228117   51160 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0831 23:07:36.228123   51160 command_runner.go:130] > # allowed_devices = [
	I0831 23:07:36.228127   51160 command_runner.go:130] > # 	"/dev/fuse",
	I0831 23:07:36.228132   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228136   51160 command_runner.go:130] > # List of additional devices. specified as
	I0831 23:07:36.228145   51160 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0831 23:07:36.228156   51160 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0831 23:07:36.228166   51160 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0831 23:07:36.228172   51160 command_runner.go:130] > # additional_devices = [
	I0831 23:07:36.228175   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228182   51160 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0831 23:07:36.228186   51160 command_runner.go:130] > # cdi_spec_dirs = [
	I0831 23:07:36.228192   51160 command_runner.go:130] > # 	"/etc/cdi",
	I0831 23:07:36.228195   51160 command_runner.go:130] > # 	"/var/run/cdi",
	I0831 23:07:36.228201   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228207   51160 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0831 23:07:36.228214   51160 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0831 23:07:36.228221   51160 command_runner.go:130] > # Defaults to false.
	I0831 23:07:36.228226   51160 command_runner.go:130] > # device_ownership_from_security_context = false
	I0831 23:07:36.228234   51160 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0831 23:07:36.228242   51160 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0831 23:07:36.228248   51160 command_runner.go:130] > # hooks_dir = [
	I0831 23:07:36.228253   51160 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0831 23:07:36.228258   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228263   51160 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0831 23:07:36.228271   51160 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0831 23:07:36.228279   51160 command_runner.go:130] > # its default mounts from the following two files:
	I0831 23:07:36.228284   51160 command_runner.go:130] > #
	I0831 23:07:36.228292   51160 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0831 23:07:36.228298   51160 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0831 23:07:36.228305   51160 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0831 23:07:36.228309   51160 command_runner.go:130] > #
	I0831 23:07:36.228323   51160 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0831 23:07:36.228331   51160 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0831 23:07:36.228341   51160 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0831 23:07:36.228348   51160 command_runner.go:130] > #      only add mounts it finds in this file.
	I0831 23:07:36.228351   51160 command_runner.go:130] > #
	I0831 23:07:36.228355   51160 command_runner.go:130] > # default_mounts_file = ""
	I0831 23:07:36.228362   51160 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0831 23:07:36.228368   51160 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0831 23:07:36.228374   51160 command_runner.go:130] > pids_limit = 1024
	I0831 23:07:36.228380   51160 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0831 23:07:36.228387   51160 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0831 23:07:36.228393   51160 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0831 23:07:36.228402   51160 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0831 23:07:36.228408   51160 command_runner.go:130] > # log_size_max = -1
	I0831 23:07:36.228415   51160 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0831 23:07:36.228424   51160 command_runner.go:130] > # log_to_journald = false
	I0831 23:07:36.228432   51160 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0831 23:07:36.228439   51160 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0831 23:07:36.228444   51160 command_runner.go:130] > # Path to directory for container attach sockets.
	I0831 23:07:36.228451   51160 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0831 23:07:36.228456   51160 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0831 23:07:36.228462   51160 command_runner.go:130] > # bind_mount_prefix = ""
	I0831 23:07:36.228467   51160 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0831 23:07:36.228473   51160 command_runner.go:130] > # read_only = false
	I0831 23:07:36.228479   51160 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0831 23:07:36.228487   51160 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0831 23:07:36.228493   51160 command_runner.go:130] > # live configuration reload.
	I0831 23:07:36.228497   51160 command_runner.go:130] > # log_level = "info"
	I0831 23:07:36.228504   51160 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0831 23:07:36.228511   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.228515   51160 command_runner.go:130] > # log_filter = ""
	I0831 23:07:36.228523   51160 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0831 23:07:36.228533   51160 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0831 23:07:36.228539   51160 command_runner.go:130] > # separated by comma.
	I0831 23:07:36.228546   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228552   51160 command_runner.go:130] > # uid_mappings = ""
	I0831 23:07:36.228562   51160 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0831 23:07:36.228571   51160 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0831 23:07:36.228576   51160 command_runner.go:130] > # separated by comma.
	I0831 23:07:36.228584   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228590   51160 command_runner.go:130] > # gid_mappings = ""
	I0831 23:07:36.228596   51160 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0831 23:07:36.228604   51160 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0831 23:07:36.228610   51160 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0831 23:07:36.228619   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228629   51160 command_runner.go:130] > # minimum_mappable_uid = -1
	I0831 23:07:36.228645   51160 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0831 23:07:36.228657   51160 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0831 23:07:36.228669   51160 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0831 23:07:36.228682   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228691   51160 command_runner.go:130] > # minimum_mappable_gid = -1
	I0831 23:07:36.228703   51160 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0831 23:07:36.228715   51160 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0831 23:07:36.228723   51160 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0831 23:07:36.228729   51160 command_runner.go:130] > # ctr_stop_timeout = 30
	I0831 23:07:36.228735   51160 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0831 23:07:36.228742   51160 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0831 23:07:36.228749   51160 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0831 23:07:36.228754   51160 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0831 23:07:36.228760   51160 command_runner.go:130] > drop_infra_ctr = false
	I0831 23:07:36.228766   51160 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0831 23:07:36.228771   51160 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0831 23:07:36.228780   51160 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0831 23:07:36.228786   51160 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0831 23:07:36.228793   51160 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0831 23:07:36.228801   51160 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0831 23:07:36.228807   51160 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0831 23:07:36.228814   51160 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0831 23:07:36.228818   51160 command_runner.go:130] > # shared_cpuset = ""
	I0831 23:07:36.228826   51160 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0831 23:07:36.228832   51160 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0831 23:07:36.228837   51160 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0831 23:07:36.228850   51160 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0831 23:07:36.228856   51160 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0831 23:07:36.228861   51160 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0831 23:07:36.228869   51160 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0831 23:07:36.228874   51160 command_runner.go:130] > # enable_criu_support = false
	I0831 23:07:36.228879   51160 command_runner.go:130] > # Enable/disable the generation of the container,
	I0831 23:07:36.228894   51160 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0831 23:07:36.228901   51160 command_runner.go:130] > # enable_pod_events = false
	I0831 23:07:36.228907   51160 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0831 23:07:36.228915   51160 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0831 23:07:36.228920   51160 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0831 23:07:36.228926   51160 command_runner.go:130] > # default_runtime = "runc"
	I0831 23:07:36.228931   51160 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0831 23:07:36.228940   51160 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0831 23:07:36.228955   51160 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0831 23:07:36.228968   51160 command_runner.go:130] > # creation as a file is not desired either.
	I0831 23:07:36.228983   51160 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0831 23:07:36.228994   51160 command_runner.go:130] > # the hostname is being managed dynamically.
	I0831 23:07:36.229005   51160 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0831 23:07:36.229022   51160 command_runner.go:130] > # ]
	I0831 23:07:36.229034   51160 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0831 23:07:36.229047   51160 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0831 23:07:36.229058   51160 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0831 23:07:36.229065   51160 command_runner.go:130] > # Each entry in the table should follow the format:
	I0831 23:07:36.229069   51160 command_runner.go:130] > #
	I0831 23:07:36.229074   51160 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0831 23:07:36.229082   51160 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0831 23:07:36.229129   51160 command_runner.go:130] > # runtime_type = "oci"
	I0831 23:07:36.229136   51160 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0831 23:07:36.229141   51160 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0831 23:07:36.229145   51160 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0831 23:07:36.229148   51160 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0831 23:07:36.229152   51160 command_runner.go:130] > # monitor_env = []
	I0831 23:07:36.229158   51160 command_runner.go:130] > # privileged_without_host_devices = false
	I0831 23:07:36.229162   51160 command_runner.go:130] > # allowed_annotations = []
	I0831 23:07:36.229169   51160 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0831 23:07:36.229179   51160 command_runner.go:130] > # Where:
	I0831 23:07:36.229187   51160 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0831 23:07:36.229193   51160 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0831 23:07:36.229201   51160 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0831 23:07:36.229209   51160 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0831 23:07:36.229214   51160 command_runner.go:130] > #   in $PATH.
	I0831 23:07:36.229220   51160 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0831 23:07:36.229226   51160 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0831 23:07:36.229232   51160 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0831 23:07:36.229238   51160 command_runner.go:130] > #   state.
	I0831 23:07:36.229245   51160 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0831 23:07:36.229253   51160 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0831 23:07:36.229260   51160 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0831 23:07:36.229267   51160 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0831 23:07:36.229273   51160 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0831 23:07:36.229281   51160 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0831 23:07:36.229289   51160 command_runner.go:130] > #   The currently recognized values are:
	I0831 23:07:36.229298   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0831 23:07:36.229307   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0831 23:07:36.229314   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0831 23:07:36.229321   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0831 23:07:36.229330   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0831 23:07:36.229339   51160 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0831 23:07:36.229348   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0831 23:07:36.229354   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0831 23:07:36.229361   51160 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0831 23:07:36.229367   51160 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0831 23:07:36.229373   51160 command_runner.go:130] > #   deprecated option "conmon".
	I0831 23:07:36.229382   51160 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0831 23:07:36.229389   51160 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0831 23:07:36.229395   51160 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0831 23:07:36.229402   51160 command_runner.go:130] > #   should be moved to the container's cgroup
	I0831 23:07:36.229408   51160 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0831 23:07:36.229415   51160 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0831 23:07:36.229421   51160 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0831 23:07:36.229428   51160 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0831 23:07:36.229438   51160 command_runner.go:130] > #
	I0831 23:07:36.229445   51160 command_runner.go:130] > # Using the seccomp notifier feature:
	I0831 23:07:36.229449   51160 command_runner.go:130] > #
	I0831 23:07:36.229454   51160 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0831 23:07:36.229462   51160 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0831 23:07:36.229467   51160 command_runner.go:130] > #
	I0831 23:07:36.229473   51160 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0831 23:07:36.229481   51160 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0831 23:07:36.229485   51160 command_runner.go:130] > #
	I0831 23:07:36.229490   51160 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0831 23:07:36.229496   51160 command_runner.go:130] > # feature.
	I0831 23:07:36.229499   51160 command_runner.go:130] > #
	I0831 23:07:36.229505   51160 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0831 23:07:36.229513   51160 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0831 23:07:36.229521   51160 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0831 23:07:36.229529   51160 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0831 23:07:36.229537   51160 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0831 23:07:36.229542   51160 command_runner.go:130] > #
	I0831 23:07:36.229548   51160 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0831 23:07:36.229556   51160 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0831 23:07:36.229561   51160 command_runner.go:130] > #
	I0831 23:07:36.229566   51160 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0831 23:07:36.229573   51160 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0831 23:07:36.229576   51160 command_runner.go:130] > #
	I0831 23:07:36.229582   51160 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0831 23:07:36.229590   51160 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0831 23:07:36.229593   51160 command_runner.go:130] > # limitation.
	I0831 23:07:36.229599   51160 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0831 23:07:36.229604   51160 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0831 23:07:36.229608   51160 command_runner.go:130] > runtime_type = "oci"
	I0831 23:07:36.229614   51160 command_runner.go:130] > runtime_root = "/run/runc"
	I0831 23:07:36.229618   51160 command_runner.go:130] > runtime_config_path = ""
	I0831 23:07:36.229625   51160 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0831 23:07:36.229632   51160 command_runner.go:130] > monitor_cgroup = "pod"
	I0831 23:07:36.229641   51160 command_runner.go:130] > monitor_exec_cgroup = ""
	I0831 23:07:36.229647   51160 command_runner.go:130] > monitor_env = [
	I0831 23:07:36.229665   51160 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0831 23:07:36.229672   51160 command_runner.go:130] > ]
	I0831 23:07:36.229680   51160 command_runner.go:130] > privileged_without_host_devices = false
	I0831 23:07:36.229692   51160 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0831 23:07:36.229703   51160 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0831 23:07:36.229715   51160 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0831 23:07:36.229726   51160 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0831 23:07:36.229737   51160 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0831 23:07:36.229742   51160 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0831 23:07:36.229752   51160 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0831 23:07:36.229761   51160 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0831 23:07:36.229767   51160 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0831 23:07:36.229774   51160 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0831 23:07:36.229777   51160 command_runner.go:130] > # Example:
	I0831 23:07:36.229781   51160 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0831 23:07:36.229785   51160 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0831 23:07:36.229795   51160 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0831 23:07:36.229799   51160 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0831 23:07:36.229802   51160 command_runner.go:130] > # cpuset = 0
	I0831 23:07:36.229806   51160 command_runner.go:130] > # cpushares = "0-1"
	I0831 23:07:36.229809   51160 command_runner.go:130] > # Where:
	I0831 23:07:36.229813   51160 command_runner.go:130] > # The workload name is workload-type.
	I0831 23:07:36.229820   51160 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0831 23:07:36.229824   51160 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0831 23:07:36.229829   51160 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0831 23:07:36.229837   51160 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0831 23:07:36.229842   51160 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0831 23:07:36.229847   51160 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0831 23:07:36.229852   51160 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0831 23:07:36.229857   51160 command_runner.go:130] > # Default value is set to true
	I0831 23:07:36.229861   51160 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0831 23:07:36.229866   51160 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0831 23:07:36.229870   51160 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0831 23:07:36.229874   51160 command_runner.go:130] > # Default value is set to 'false'
	I0831 23:07:36.229878   51160 command_runner.go:130] > # disable_hostport_mapping = false
	I0831 23:07:36.229883   51160 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0831 23:07:36.229891   51160 command_runner.go:130] > #
	I0831 23:07:36.229896   51160 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0831 23:07:36.229901   51160 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0831 23:07:36.229909   51160 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0831 23:07:36.229914   51160 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0831 23:07:36.229919   51160 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0831 23:07:36.229923   51160 command_runner.go:130] > [crio.image]
	I0831 23:07:36.229928   51160 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0831 23:07:36.229932   51160 command_runner.go:130] > # default_transport = "docker://"
	I0831 23:07:36.229938   51160 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0831 23:07:36.229943   51160 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0831 23:07:36.229948   51160 command_runner.go:130] > # global_auth_file = ""
	I0831 23:07:36.229953   51160 command_runner.go:130] > # The image used to instantiate infra containers.
	I0831 23:07:36.229957   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.229961   51160 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0831 23:07:36.229967   51160 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0831 23:07:36.229972   51160 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0831 23:07:36.229976   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.229984   51160 command_runner.go:130] > # pause_image_auth_file = ""
	I0831 23:07:36.229991   51160 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0831 23:07:36.229997   51160 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0831 23:07:36.230004   51160 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0831 23:07:36.230010   51160 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0831 23:07:36.230016   51160 command_runner.go:130] > # pause_command = "/pause"
	I0831 23:07:36.230021   51160 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0831 23:07:36.230028   51160 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0831 23:07:36.230034   51160 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0831 23:07:36.230042   51160 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0831 23:07:36.230050   51160 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0831 23:07:36.230056   51160 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0831 23:07:36.230061   51160 command_runner.go:130] > # pinned_images = [
	I0831 23:07:36.230065   51160 command_runner.go:130] > # ]
	I0831 23:07:36.230070   51160 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0831 23:07:36.230079   51160 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0831 23:07:36.230093   51160 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0831 23:07:36.230105   51160 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0831 23:07:36.230121   51160 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0831 23:07:36.230131   51160 command_runner.go:130] > # signature_policy = ""
	I0831 23:07:36.230138   51160 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0831 23:07:36.230149   51160 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0831 23:07:36.230160   51160 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0831 23:07:36.230170   51160 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0831 23:07:36.230180   51160 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0831 23:07:36.230186   51160 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0831 23:07:36.230194   51160 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0831 23:07:36.230205   51160 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0831 23:07:36.230210   51160 command_runner.go:130] > # changing them here.
	I0831 23:07:36.230218   51160 command_runner.go:130] > # insecure_registries = [
	I0831 23:07:36.230224   51160 command_runner.go:130] > # ]
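	The comments above point to /etc/containers/registries.conf as the preferred place to configure registries; the sketch below shows the CRI-O-local alternative they mention. It assumes the drop-in directory /etc/crio/crio.conf.d is honored on this image, and the registry address is a placeholder, not something from this test.

		# Illustrative sketch only: skip TLS verification for one registry via a CRI-O drop-in.
		# "registry.example.internal:5000" is a placeholder; the config above prefers registries.conf.
		sudo tee /etc/crio/crio.conf.d/10-insecure-registry.conf <<-'EOF'
		[crio.image]
		insecure_registries = [
		  "registry.example.internal:5000"
		]
		EOF
		sudo systemctl restart crio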
	I0831 23:07:36.230234   51160 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0831 23:07:36.230244   51160 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0831 23:07:36.230250   51160 command_runner.go:130] > # image_volumes = "mkdir"
	I0831 23:07:36.230260   51160 command_runner.go:130] > # Temporary directory to use for storing big files
	I0831 23:07:36.230267   51160 command_runner.go:130] > # big_files_temporary_dir = ""
	I0831 23:07:36.230282   51160 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0831 23:07:36.230292   51160 command_runner.go:130] > # CNI plugins.
	I0831 23:07:36.230297   51160 command_runner.go:130] > [crio.network]
	I0831 23:07:36.230306   51160 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0831 23:07:36.230314   51160 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0831 23:07:36.230318   51160 command_runner.go:130] > # cni_default_network = ""
	I0831 23:07:36.230326   51160 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0831 23:07:36.230330   51160 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0831 23:07:36.230338   51160 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0831 23:07:36.230343   51160 command_runner.go:130] > # plugin_dirs = [
	I0831 23:07:36.230347   51160 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0831 23:07:36.230350   51160 command_runner.go:130] > # ]
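	Since CRI-O picks up the first configuration it finds under network_dir, the shape of such a file matters. The following is a minimal, purely illustrative bridge conflist; the file name, network name and subnet are placeholders, and this cluster actually uses kindnet as noted further down in the log.

		# Sketch of a minimal CNI conflist CRI-O could pick up from network_dir.
		# All names and the subnet are placeholders for illustration only.
		sudo tee /etc/cni/net.d/10-example.conflist <<-'EOF'
		{
		  "cniVersion": "1.0.0",
		  "name": "example",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "cni0",
		      "isGateway": true,
		      "ipMasq": true,
		      "ipam": {
		        "type": "host-local",
		        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
		      }
		    }
		  ]
		}
		EOF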
	I0831 23:07:36.230355   51160 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0831 23:07:36.230362   51160 command_runner.go:130] > [crio.metrics]
	I0831 23:07:36.230366   51160 command_runner.go:130] > # Globally enable or disable metrics support.
	I0831 23:07:36.230371   51160 command_runner.go:130] > enable_metrics = true
	I0831 23:07:36.230375   51160 command_runner.go:130] > # Specify enabled metrics collectors.
	I0831 23:07:36.230380   51160 command_runner.go:130] > # Per default all metrics are enabled.
	I0831 23:07:36.230391   51160 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0831 23:07:36.230400   51160 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0831 23:07:36.230405   51160 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0831 23:07:36.230411   51160 command_runner.go:130] > # metrics_collectors = [
	I0831 23:07:36.230415   51160 command_runner.go:130] > # 	"operations",
	I0831 23:07:36.230423   51160 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0831 23:07:36.230430   51160 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0831 23:07:36.230434   51160 command_runner.go:130] > # 	"operations_errors",
	I0831 23:07:36.230440   51160 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0831 23:07:36.230444   51160 command_runner.go:130] > # 	"image_pulls_by_name",
	I0831 23:07:36.230450   51160 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0831 23:07:36.230454   51160 command_runner.go:130] > # 	"image_pulls_failures",
	I0831 23:07:36.230458   51160 command_runner.go:130] > # 	"image_pulls_successes",
	I0831 23:07:36.230464   51160 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0831 23:07:36.230468   51160 command_runner.go:130] > # 	"image_layer_reuse",
	I0831 23:07:36.230475   51160 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0831 23:07:36.230479   51160 command_runner.go:130] > # 	"containers_oom_total",
	I0831 23:07:36.230484   51160 command_runner.go:130] > # 	"containers_oom",
	I0831 23:07:36.230488   51160 command_runner.go:130] > # 	"processes_defunct",
	I0831 23:07:36.230494   51160 command_runner.go:130] > # 	"operations_total",
	I0831 23:07:36.230499   51160 command_runner.go:130] > # 	"operations_latency_seconds",
	I0831 23:07:36.230505   51160 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0831 23:07:36.230509   51160 command_runner.go:130] > # 	"operations_errors_total",
	I0831 23:07:36.230514   51160 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0831 23:07:36.230519   51160 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0831 23:07:36.230525   51160 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0831 23:07:36.230529   51160 command_runner.go:130] > # 	"image_pulls_success_total",
	I0831 23:07:36.230537   51160 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0831 23:07:36.230542   51160 command_runner.go:130] > # 	"containers_oom_count_total",
	I0831 23:07:36.230548   51160 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0831 23:07:36.230553   51160 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0831 23:07:36.230558   51160 command_runner.go:130] > # ]
	I0831 23:07:36.230563   51160 command_runner.go:130] > # The port on which the metrics server will listen.
	I0831 23:07:36.230569   51160 command_runner.go:130] > # metrics_port = 9090
	I0831 23:07:36.230574   51160 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0831 23:07:36.230580   51160 command_runner.go:130] > # metrics_socket = ""
	I0831 23:07:36.230591   51160 command_runner.go:130] > # The certificate for the secure metrics server.
	I0831 23:07:36.230599   51160 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0831 23:07:36.230607   51160 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0831 23:07:36.230614   51160 command_runner.go:130] > # certificate on any modification event.
	I0831 23:07:36.230618   51160 command_runner.go:130] > # metrics_cert = ""
	I0831 23:07:36.230628   51160 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0831 23:07:36.230644   51160 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0831 23:07:36.230653   51160 command_runner.go:130] > # metrics_key = ""
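	With enable_metrics = true and the commented defaults left in place, CRI-O exposes Prometheus metrics on port 9090. A hedged check from the node, taking the port and the "crio_" metric-name prefix from the comments above:

		# Sketch: scrape CRI-O's metrics endpoint on the default port 9090.
		# Collector names appear with the "crio_" prefix, e.g. crio_operations_total.
		curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head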
	I0831 23:07:36.230661   51160 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0831 23:07:36.230670   51160 command_runner.go:130] > [crio.tracing]
	I0831 23:07:36.230678   51160 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0831 23:07:36.230687   51160 command_runner.go:130] > # enable_tracing = false
	I0831 23:07:36.230695   51160 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0831 23:07:36.230703   51160 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0831 23:07:36.230714   51160 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0831 23:07:36.230724   51160 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
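	Tracing is off by default; below is a sketch of turning it on with the same keys and the commented default collector address shown above, sampling every span. The drop-in path is an assumption, not something this test configures.

		# Sketch: export OpenTelemetry traces to a collector on the default address.
		sudo tee /etc/crio/crio.conf.d/20-tracing.conf <<-'EOF'
		[crio.tracing]
		enable_tracing = true
		tracing_endpoint = "0.0.0.0:4317"
		tracing_sampling_rate_per_million = 1000000
		EOF
		sudo systemctl restart crio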
	I0831 23:07:36.230731   51160 command_runner.go:130] > # CRI-O NRI configuration.
	I0831 23:07:36.230738   51160 command_runner.go:130] > [crio.nri]
	I0831 23:07:36.230743   51160 command_runner.go:130] > # Globally enable or disable NRI.
	I0831 23:07:36.230752   51160 command_runner.go:130] > # enable_nri = false
	I0831 23:07:36.230759   51160 command_runner.go:130] > # NRI socket to listen on.
	I0831 23:07:36.230764   51160 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0831 23:07:36.230770   51160 command_runner.go:130] > # NRI plugin directory to use.
	I0831 23:07:36.230774   51160 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0831 23:07:36.230779   51160 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0831 23:07:36.230784   51160 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0831 23:07:36.230792   51160 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0831 23:07:36.230796   51160 command_runner.go:130] > # nri_disable_connections = false
	I0831 23:07:36.230806   51160 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0831 23:07:36.230812   51160 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0831 23:07:36.230817   51160 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0831 23:07:36.230823   51160 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0831 23:07:36.230829   51160 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0831 23:07:36.230835   51160 command_runner.go:130] > [crio.stats]
	I0831 23:07:36.230844   51160 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0831 23:07:36.230851   51160 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0831 23:07:36.230860   51160 command_runner.go:130] > # stats_collection_period = 0
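	With stats_collection_period left at 0, pod and container stats are only gathered when a client asks for them. A hedged way to trigger that on the node:

		# Sketch: request container stats on demand via crictl.
		sudo crictl stats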
	I0831 23:07:36.231059   51160 cni.go:84] Creating CNI manager for ""
	I0831 23:07:36.231075   51160 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0831 23:07:36.231095   51160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:07:36.231117   51160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328486 NodeName:multinode-328486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 23:07:36.231250   51160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328486"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 23:07:36.231311   51160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:07:36.241716   51160 command_runner.go:130] > kubeadm
	I0831 23:07:36.241736   51160 command_runner.go:130] > kubectl
	I0831 23:07:36.241741   51160 command_runner.go:130] > kubelet
	I0831 23:07:36.241759   51160 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:07:36.241811   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 23:07:36.251054   51160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0831 23:07:36.268147   51160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:07:36.284351   51160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0831 23:07:36.301455   51160 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0831 23:07:36.305425   51160 command_runner.go:130] > 192.168.39.107	control-plane.minikube.internal
	I0831 23:07:36.305511   51160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:07:36.442373   51160 ssh_runner.go:195] Run: sudo systemctl start kubelet
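	At this point the kubelet drop-in, the unit file and the staged kubeadm config are all on the node. A hedged way to double-check them, reusing the paths and kubeadm binary location from the log above ("kubeadm config validate" assumes the v1.31.0 binary supports it, which recent releases do):

		# Sketch: confirm the kubelet picked up the 10-kubeadm.conf drop-in
		# and that the staged kubeadm config parses cleanly.
		sudo systemctl cat kubelet
		sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
		    --config /var/tmp/minikube/kubeadm.yaml.new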
	I0831 23:07:36.457168   51160 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486 for IP: 192.168.39.107
	I0831 23:07:36.457193   51160 certs.go:194] generating shared ca certs ...
	I0831 23:07:36.457213   51160 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:07:36.457363   51160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 23:07:36.457415   51160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 23:07:36.457429   51160 certs.go:256] generating profile certs ...
	I0831 23:07:36.457513   51160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/client.key
	I0831 23:07:36.457587   51160 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.key.ee1e7169
	I0831 23:07:36.457640   51160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.key
	I0831 23:07:36.457655   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:07:36.457674   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:07:36.457692   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:07:36.457711   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:07:36.457729   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:07:36.457749   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:07:36.457768   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:07:36.457786   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:07:36.457863   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 23:07:36.457904   51160 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 23:07:36.457918   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 23:07:36.457952   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:07:36.457984   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:07:36.458016   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 23:07:36.458068   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:07:36.458125   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.458146   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.458165   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.458741   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:07:36.483460   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:07:36.507473   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:07:36.531115   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:07:36.554215   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0831 23:07:36.577936   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:07:36.601696   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:07:36.625372   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:07:36.649389   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 23:07:36.672751   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 23:07:36.697764   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:07:36.722951   51160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:07:36.740566   51160 ssh_runner.go:195] Run: openssl version
	I0831 23:07:36.746507   51160 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0831 23:07:36.746591   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 23:07:36.757642   51160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.762348   51160 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.762380   51160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.762418   51160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.768154   51160 command_runner.go:130] > 51391683
	I0831 23:07:36.768296   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 23:07:36.777745   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 23:07:36.788550   51160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.793014   51160 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.793042   51160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.793074   51160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.798824   51160 command_runner.go:130] > 3ec20f2e
	I0831 23:07:36.798870   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:07:36.807855   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:07:36.818102   51160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.822584   51160 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.822605   51160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.822639   51160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.827953   51160 command_runner.go:130] > b5213941
	I0831 23:07:36.828064   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
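	The three blocks above repeat one pattern: hash each CA certificate's subject with openssl, then expose it to OpenSSL's default trust directory as a "<hash>.0" symlink. The same steps, condensed into a sketch (the path and the b5213941 hash come straight from the log):

		# OpenSSL resolves CA certs in /etc/ssl/certs by subject hash,
		# so each PEM gets a "<hash>.0" symlink pointing back at it.
		cert=/usr/share/ca-certificates/minikubeCA.pem
		hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 for this CA
		sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"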
	I0831 23:07:36.837404   51160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:07:36.841856   51160 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:07:36.841881   51160 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0831 23:07:36.841889   51160 command_runner.go:130] > Device: 253,1	Inode: 2103318     Links: 1
	I0831 23:07:36.841897   51160 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0831 23:07:36.841907   51160 command_runner.go:130] > Access: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841918   51160 command_runner.go:130] > Modify: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841926   51160 command_runner.go:130] > Change: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841934   51160 command_runner.go:130] >  Birth: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841991   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:07:36.847679   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.847740   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:07:36.853358   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.853412   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:07:36.858976   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.859136   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:07:36.864658   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.864822   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:07:36.870609   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.870686   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0831 23:07:36.876438   51160 command_runner.go:130] > Certificate will not expire
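	Each "-checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl prints "Certificate will not expire" and exits 0 when it does not, and exits non-zero otherwise. A standalone sketch against one of the same files:

		# -checkend N fails (non-zero exit) if the cert expires within N seconds; 86400 s = 24 h.
		if sudo openssl x509 -noout -checkend 86400 \
		    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
		    echo "certificate is valid for at least another 24h"
		fi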
	I0831 23:07:36.876505   51160 kubeadm.go:392] StartCluster: {Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:07:36.876609   51160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 23:07:36.876664   51160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:07:36.916517   51160 command_runner.go:130] > 1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391
	I0831 23:07:36.916546   51160 command_runner.go:130] > 59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b
	I0831 23:07:36.916557   51160 command_runner.go:130] > b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999
	I0831 23:07:36.916564   51160 command_runner.go:130] > a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060
	I0831 23:07:36.916569   51160 command_runner.go:130] > 4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90
	I0831 23:07:36.916576   51160 command_runner.go:130] > 980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd
	I0831 23:07:36.916581   51160 command_runner.go:130] > 4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1
	I0831 23:07:36.916598   51160 command_runner.go:130] > 4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5
	I0831 23:07:36.916617   51160 cri.go:89] found id: "1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391"
	I0831 23:07:36.916624   51160 cri.go:89] found id: "59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b"
	I0831 23:07:36.916627   51160 cri.go:89] found id: "b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999"
	I0831 23:07:36.916630   51160 cri.go:89] found id: "a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060"
	I0831 23:07:36.916633   51160 cri.go:89] found id: "4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90"
	I0831 23:07:36.916637   51160 cri.go:89] found id: "980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd"
	I0831 23:07:36.916639   51160 cri.go:89] found id: "4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1"
	I0831 23:07:36.916642   51160 cri.go:89] found id: "4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5"
	I0831 23:07:36.916645   51160 cri.go:89] found id: ""
	I0831 23:07:36.916685   51160 ssh_runner.go:195] Run: sudo runc list -f json
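	The IDs listed above come from filtering all CRI containers by the kube-system namespace label. A hedged way to drill into one of them on the node; the ID is copied from the log, and "-o json" is crictl's standard output flag:

		# Re-run the same label filter, then inspect one of the returned IDs.
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		sudo crictl inspect -o json 1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391 | head -n 20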
	
	
	==> CRI-O <==
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.778491087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9066daf-0a4e-43de-ba69-7ad1f089c00f name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.778848708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4b9117718867762d4ec1613ed7322c4abfa72005cb92d8018b922282a80d85,PodSandboxId:6354e5a895ed6d06456ec5b16d6c824cc23bd897c9907aaaa43a4d334272654c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725145697525584542,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,PodSandboxId:e2197d9bd51fe0d63d4cf8c7d95b6bb41789d6b6ddea7eb358cf6448fb27cdbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725145664094961658,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,PodSandboxId:f53b33402e0d6fe1c895ae9b722f85501dac8408a5b2b0f12d69f790d0179922,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725145664071926088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,PodSandboxId:890e773dbbc410c322dc4a348e1d9da6a851372cd12a334f86770588e560c82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725145663882100507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,PodSandboxId:f4b8369314621c963ce45db20de7698fbffc84d546a2d63e9909759e07f64af6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725145663823747300,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,PodSandboxId:8ba69a9980cb0a5009471a4c2b5b1bf64b9c7de6caa2af0de4d2756c6c5e179f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725145659022601896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,PodSandboxId:adc69baecb81c23207a5d36a86e07db3152d2a3378967f5a97f5b99f749d0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725145658950613576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,PodSandboxId:5a7b0aa2a1c6c40fe4c2485c32c0b4a7a39d5729b7b40d809195a111e186ebdc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725145658969911445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,PodSandboxId:bfc9ffecf4faf522c4d40e1c84e573c9175eb61f6ab76e1274333722a90e9709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725145658865941512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea00636a8abd5658da3a89d2e52a578b618823bd0951c06274aee040f9fbc93,PodSandboxId:211ac5f7cdc1e037699bd87c354f6495083806abd09c529554b01ec871df2ff2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725145334349135237,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391,PodSandboxId:baeb7772fb632d5aa1b822df12d0f7e75ede183a7f4fff14145e0adb86b98348,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725145276679568324,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b,PodSandboxId:0d9ed579185612e622f9270e1911fd5d4c4bea5592416bb7e67870dadabf59cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725145276616262398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999,PodSandboxId:53bc0437f5619a15de3776b9590136135893e16b37d3f950bc35c853e61cb4c2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725145264916205977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060,PodSandboxId:a194e9b69b97299473ef3967cdc39580a099f2a07408d8f91035e907cd75998e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725145261144659263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90,PodSandboxId:70a70b890a48e99f30786b2676e6887ad512fc969979cf899136fb90216d16cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725145250476776281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
d8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd,PodSandboxId:27f9441cd679a6d36b2eff027de3386b40f6bf1132a69a617b8d315c3b0b21e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725145250428368437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1,PodSandboxId:0ec8ad0e93908d9e917cab57362321f9e326c62ac8362baa80ac19c8b67869b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725145250373951807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5,PodSandboxId:17b67a34f259d067a9766198f97d0dd67f2b8ac190f5267941dca8f4b5910780,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725145250333252755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9066daf-0a4e-43de-ba69-7ad1f089c00f name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.823522125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27d487f9-18b3-4111-98ec-bc5295633ce2 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.823630690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27d487f9-18b3-4111-98ec-bc5295633ce2 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.823526780Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=1f2eff15-6153-44aa-a953-2ac822d0cebb name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.823727090Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f2eff15-6153-44aa-a953-2ac822d0cebb name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.825689685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0970f09f-1264-4dec-aed8-a37abc61cff1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.826114526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145764826089933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0970f09f-1264-4dec-aed8-a37abc61cff1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.826822624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43802564-b84d-4d11-bc53-92b1830ac713 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.826880322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43802564-b84d-4d11-bc53-92b1830ac713 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.827483912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4b9117718867762d4ec1613ed7322c4abfa72005cb92d8018b922282a80d85,PodSandboxId:6354e5a895ed6d06456ec5b16d6c824cc23bd897c9907aaaa43a4d334272654c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725145697525584542,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,PodSandboxId:e2197d9bd51fe0d63d4cf8c7d95b6bb41789d6b6ddea7eb358cf6448fb27cdbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725145664094961658,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,PodSandboxId:f53b33402e0d6fe1c895ae9b722f85501dac8408a5b2b0f12d69f790d0179922,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725145664071926088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,PodSandboxId:890e773dbbc410c322dc4a348e1d9da6a851372cd12a334f86770588e560c82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725145663882100507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,PodSandboxId:f4b8369314621c963ce45db20de7698fbffc84d546a2d63e9909759e07f64af6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725145663823747300,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,PodSandboxId:8ba69a9980cb0a5009471a4c2b5b1bf64b9c7de6caa2af0de4d2756c6c5e179f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725145659022601896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,PodSandboxId:adc69baecb81c23207a5d36a86e07db3152d2a3378967f5a97f5b99f749d0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725145658950613576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,PodSandboxId:5a7b0aa2a1c6c40fe4c2485c32c0b4a7a39d5729b7b40d809195a111e186ebdc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725145658969911445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,PodSandboxId:bfc9ffecf4faf522c4d40e1c84e573c9175eb61f6ab76e1274333722a90e9709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725145658865941512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea00636a8abd5658da3a89d2e52a578b618823bd0951c06274aee040f9fbc93,PodSandboxId:211ac5f7cdc1e037699bd87c354f6495083806abd09c529554b01ec871df2ff2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725145334349135237,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391,PodSandboxId:baeb7772fb632d5aa1b822df12d0f7e75ede183a7f4fff14145e0adb86b98348,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725145276679568324,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b,PodSandboxId:0d9ed579185612e622f9270e1911fd5d4c4bea5592416bb7e67870dadabf59cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725145276616262398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999,PodSandboxId:53bc0437f5619a15de3776b9590136135893e16b37d3f950bc35c853e61cb4c2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725145264916205977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060,PodSandboxId:a194e9b69b97299473ef3967cdc39580a099f2a07408d8f91035e907cd75998e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725145261144659263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90,PodSandboxId:70a70b890a48e99f30786b2676e6887ad512fc969979cf899136fb90216d16cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725145250476776281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
d8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd,PodSandboxId:27f9441cd679a6d36b2eff027de3386b40f6bf1132a69a617b8d315c3b0b21e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725145250428368437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1,PodSandboxId:0ec8ad0e93908d9e917cab57362321f9e326c62ac8362baa80ac19c8b67869b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725145250373951807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5,PodSandboxId:17b67a34f259d067a9766198f97d0dd67f2b8ac190f5267941dca8f4b5910780,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725145250333252755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43802564-b84d-4d11-bc53-92b1830ac713 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.874580443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98decd0c-b2eb-49fc-898b-3ee736da6612 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.874658094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98decd0c-b2eb-49fc-898b-3ee736da6612 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.876037340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ca08529-2511-499a-ad97-bb56d5d69502 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.876545559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145764876522124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ca08529-2511-499a-ad97-bb56d5d69502 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.877281740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3452f86-55d3-4ae3-ba13-bde7b9cf172d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.877412005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3452f86-55d3-4ae3-ba13-bde7b9cf172d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.877767133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4b9117718867762d4ec1613ed7322c4abfa72005cb92d8018b922282a80d85,PodSandboxId:6354e5a895ed6d06456ec5b16d6c824cc23bd897c9907aaaa43a4d334272654c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725145697525584542,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,PodSandboxId:e2197d9bd51fe0d63d4cf8c7d95b6bb41789d6b6ddea7eb358cf6448fb27cdbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725145664094961658,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,PodSandboxId:f53b33402e0d6fe1c895ae9b722f85501dac8408a5b2b0f12d69f790d0179922,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725145664071926088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,PodSandboxId:890e773dbbc410c322dc4a348e1d9da6a851372cd12a334f86770588e560c82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725145663882100507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,PodSandboxId:f4b8369314621c963ce45db20de7698fbffc84d546a2d63e9909759e07f64af6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725145663823747300,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,PodSandboxId:8ba69a9980cb0a5009471a4c2b5b1bf64b9c7de6caa2af0de4d2756c6c5e179f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725145659022601896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,PodSandboxId:adc69baecb81c23207a5d36a86e07db3152d2a3378967f5a97f5b99f749d0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725145658950613576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,PodSandboxId:5a7b0aa2a1c6c40fe4c2485c32c0b4a7a39d5729b7b40d809195a111e186ebdc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725145658969911445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,PodSandboxId:bfc9ffecf4faf522c4d40e1c84e573c9175eb61f6ab76e1274333722a90e9709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725145658865941512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea00636a8abd5658da3a89d2e52a578b618823bd0951c06274aee040f9fbc93,PodSandboxId:211ac5f7cdc1e037699bd87c354f6495083806abd09c529554b01ec871df2ff2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725145334349135237,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391,PodSandboxId:baeb7772fb632d5aa1b822df12d0f7e75ede183a7f4fff14145e0adb86b98348,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725145276679568324,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b,PodSandboxId:0d9ed579185612e622f9270e1911fd5d4c4bea5592416bb7e67870dadabf59cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725145276616262398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999,PodSandboxId:53bc0437f5619a15de3776b9590136135893e16b37d3f950bc35c853e61cb4c2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725145264916205977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060,PodSandboxId:a194e9b69b97299473ef3967cdc39580a099f2a07408d8f91035e907cd75998e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725145261144659263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90,PodSandboxId:70a70b890a48e99f30786b2676e6887ad512fc969979cf899136fb90216d16cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725145250476776281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
d8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd,PodSandboxId:27f9441cd679a6d36b2eff027de3386b40f6bf1132a69a617b8d315c3b0b21e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725145250428368437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1,PodSandboxId:0ec8ad0e93908d9e917cab57362321f9e326c62ac8362baa80ac19c8b67869b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725145250373951807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5,PodSandboxId:17b67a34f259d067a9766198f97d0dd67f2b8ac190f5267941dca8f4b5910780,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725145250333252755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3452f86-55d3-4ae3-ba13-bde7b9cf172d name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.926613762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6fdeaa2-9cb5-42f5-8fb7-853380ff332c name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.926687638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6fdeaa2-9cb5-42f5-8fb7-853380ff332c name=/runtime.v1.RuntimeService/Version
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.927915568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82973fcc-7c59-4129-8721-91535eee2892 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.928353242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145764928330248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82973fcc-7c59-4129-8721-91535eee2892 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.929128553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c559bfb8-527a-4504-98f6-8b84fd34b2e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.929202819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c559bfb8-527a-4504-98f6-8b84fd34b2e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:09:24 multinode-328486 crio[2728]: time="2024-08-31 23:09:24.929907906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4b9117718867762d4ec1613ed7322c4abfa72005cb92d8018b922282a80d85,PodSandboxId:6354e5a895ed6d06456ec5b16d6c824cc23bd897c9907aaaa43a4d334272654c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725145697525584542,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,PodSandboxId:e2197d9bd51fe0d63d4cf8c7d95b6bb41789d6b6ddea7eb358cf6448fb27cdbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725145664094961658,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,PodSandboxId:f53b33402e0d6fe1c895ae9b722f85501dac8408a5b2b0f12d69f790d0179922,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725145664071926088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,PodSandboxId:890e773dbbc410c322dc4a348e1d9da6a851372cd12a334f86770588e560c82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725145663882100507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,PodSandboxId:f4b8369314621c963ce45db20de7698fbffc84d546a2d63e9909759e07f64af6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725145663823747300,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,PodSandboxId:8ba69a9980cb0a5009471a4c2b5b1bf64b9c7de6caa2af0de4d2756c6c5e179f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725145659022601896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,PodSandboxId:adc69baecb81c23207a5d36a86e07db3152d2a3378967f5a97f5b99f749d0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725145658950613576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,PodSandboxId:5a7b0aa2a1c6c40fe4c2485c32c0b4a7a39d5729b7b40d809195a111e186ebdc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725145658969911445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,PodSandboxId:bfc9ffecf4faf522c4d40e1c84e573c9175eb61f6ab76e1274333722a90e9709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725145658865941512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea00636a8abd5658da3a89d2e52a578b618823bd0951c06274aee040f9fbc93,PodSandboxId:211ac5f7cdc1e037699bd87c354f6495083806abd09c529554b01ec871df2ff2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725145334349135237,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391,PodSandboxId:baeb7772fb632d5aa1b822df12d0f7e75ede183a7f4fff14145e0adb86b98348,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725145276679568324,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b,PodSandboxId:0d9ed579185612e622f9270e1911fd5d4c4bea5592416bb7e67870dadabf59cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725145276616262398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999,PodSandboxId:53bc0437f5619a15de3776b9590136135893e16b37d3f950bc35c853e61cb4c2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725145264916205977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060,PodSandboxId:a194e9b69b97299473ef3967cdc39580a099f2a07408d8f91035e907cd75998e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725145261144659263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90,PodSandboxId:70a70b890a48e99f30786b2676e6887ad512fc969979cf899136fb90216d16cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725145250476776281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
d8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd,PodSandboxId:27f9441cd679a6d36b2eff027de3386b40f6bf1132a69a617b8d315c3b0b21e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725145250428368437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1,PodSandboxId:0ec8ad0e93908d9e917cab57362321f9e326c62ac8362baa80ac19c8b67869b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725145250373951807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5,PodSandboxId:17b67a34f259d067a9766198f97d0dd67f2b8ac190f5267941dca8f4b5910780,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725145250333252755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c559bfb8-527a-4504-98f6-8b84fd34b2e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9e4b911771886       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6354e5a895ed6       busybox-7dff88458-d8fm4
	be5242d84e14b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   e2197d9bd51fe       kindnet-db4rl
	4e4eec6d5cd86       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   f53b33402e0d6       coredns-6f6b679f8f-qc6xv
	766cdf89d49cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   890e773dbbc41       storage-provisioner
	af182014be0d2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   f4b8369314621       kube-proxy-d26wn
	3457ed73a33a3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   8ba69a9980cb0       etcd-multinode-328486
	355c302ecd1e4       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   5a7b0aa2a1c6c       kube-apiserver-multinode-328486
	292216e837e65       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   adc69baecb81c       kube-scheduler-multinode-328486
	86c26f347c5db       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   bfc9ffecf4faf       kube-controller-manager-multinode-328486
	5ea00636a8abd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   211ac5f7cdc1e       busybox-7dff88458-d8fm4
	1854f60b239ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   baeb7772fb632       coredns-6f6b679f8f-qc6xv
	59e76f99b3b4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   0d9ed57918561       storage-provisioner
	b02f023f5ad89       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   53bc0437f5619       kindnet-db4rl
	a30878aa9b46b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   a194e9b69b972       kube-proxy-d26wn
	4ba23ea858780       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   70a70b890a48e       kube-scheduler-multinode-328486
	980e8b26efbbf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   27f9441cd679a       etcd-multinode-328486
	4761d2795a972       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   0ec8ad0e93908       kube-apiserver-multinode-328486
	4812c3914931d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   17b67a34f259d       kube-controller-manager-multinode-328486
	
	
	==> coredns [1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391] <==
	[INFO] 10.244.1.2:42525 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001745588s
	[INFO] 10.244.1.2:35977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138438s
	[INFO] 10.244.1.2:42908 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104633s
	[INFO] 10.244.1.2:47781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001204934s
	[INFO] 10.244.1.2:54698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065189s
	[INFO] 10.244.1.2:46331 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069999s
	[INFO] 10.244.1.2:39274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074391s
	[INFO] 10.244.0.3:55327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225689s
	[INFO] 10.244.0.3:57039 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048526s
	[INFO] 10.244.0.3:44547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045341s
	[INFO] 10.244.0.3:58409 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036559s
	[INFO] 10.244.1.2:38226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120244s
	[INFO] 10.244.1.2:43580 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124347s
	[INFO] 10.244.1.2:50032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085135s
	[INFO] 10.244.1.2:57073 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077512s
	[INFO] 10.244.0.3:49426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107976s
	[INFO] 10.244.0.3:50100 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101425s
	[INFO] 10.244.0.3:57202 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009954s
	[INFO] 10.244.0.3:52483 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155515s
	[INFO] 10.244.1.2:56872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128138s
	[INFO] 10.244.1.2:60473 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105079s
	[INFO] 10.244.1.2:36850 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113804s
	[INFO] 10.244.1.2:50146 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083122s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52559 - 46687 "HINFO IN 8178519279946959702.1478944245455869154. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014545178s
	
	
	==> describe nodes <==
	Name:               multinode-328486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=multinode-328486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T23_00_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:00:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:09:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:00:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:00:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:00:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-328486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b85ddfecfee143998777e9211191b0e8
	  System UUID:                b85ddfec-fee1-4399-8777-e9211191b0e8
	  Boot ID:                    80cd42b0-9834-46da-9c3b-79f201f788b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d8fm4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 coredns-6f6b679f8f-qc6xv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m25s
	  kube-system                 etcd-multinode-328486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m30s
	  kube-system                 kindnet-db4rl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m25s
	  kube-system                 kube-apiserver-multinode-328486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-multinode-328486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-d26wn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-multinode-328486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m23s                kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m30s                kubelet          Node multinode-328486 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s                kubelet          Node multinode-328486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s                kubelet          Node multinode-328486 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m30s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m26s                node-controller  Node multinode-328486 event: Registered Node multinode-328486 in Controller
	  Normal  NodeReady                8m9s                 kubelet          Node multinode-328486 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node multinode-328486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node multinode-328486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node multinode-328486 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-328486 event: Registered Node multinode-328486 in Controller
	
	
	Name:               multinode-328486-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328486-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=multinode-328486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T23_08_24_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:08:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328486-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:09:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:08:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:08:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:08:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:08:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    multinode-328486-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b9b162ba7134f7f94c043db2101b498
	  System UUID:                6b9b162b-a713-4f7f-94c0-43db2101b498
	  Boot ID:                    952cd52a-e7a5-40bc-997c-ebc4b1a4d144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t729k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-zh78t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m38s
	  kube-system                 kube-proxy-qp4jf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m32s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m38s (x2 over 7m38s)  kubelet     Node multinode-328486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x2 over 7m38s)  kubelet     Node multinode-328486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x2 over 7m38s)  kubelet     Node multinode-328486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m17s                  kubelet     Node multinode-328486-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-328486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-328486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-328486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-328486-m02 status is now: NodeReady
	
	
	Name:               multinode-328486-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328486-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=multinode-328486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T23_09_03_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:09:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328486-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:09:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:09:22 +0000   Sat, 31 Aug 2024 23:09:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:09:22 +0000   Sat, 31 Aug 2024 23:09:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:09:22 +0000   Sat, 31 Aug 2024 23:09:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:09:22 +0000   Sat, 31 Aug 2024 23:09:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    multinode-328486-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7768eae809be4de1a8d43dfbfd3da693
	  System UUID:                7768eae8-09be-4de1-a8d4-3dfbfd3da693
	  Boot ID:                    ab15a1db-0bd1-4013-ac0e-90980305657f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rvzrt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-proxy-4phsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m32s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m43s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m37s (x2 over 6m38s)  kubelet     Node multinode-328486-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x2 over 6m38s)  kubelet     Node multinode-328486-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x2 over 6m38s)  kubelet     Node multinode-328486-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet     Node multinode-328486-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-328486-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-328486-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-328486-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m28s                  kubelet     Node multinode-328486-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-328486-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-328486-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-328486-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-328486-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.057459] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.171044] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.148491] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.276772] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.977559] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.413488] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.066783] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989732] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.075167] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.160953] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.132683] kauditd_printk_skb: 21 callbacks suppressed
	[Aug31 23:01] kauditd_printk_skb: 56 callbacks suppressed
	[Aug31 23:02] kauditd_printk_skb: 14 callbacks suppressed
	[Aug31 23:07] systemd-fstab-generator[2652]: Ignoring "noauto" option for root device
	[  +0.153386] systemd-fstab-generator[2665]: Ignoring "noauto" option for root device
	[  +0.162678] systemd-fstab-generator[2679]: Ignoring "noauto" option for root device
	[  +0.151813] systemd-fstab-generator[2691]: Ignoring "noauto" option for root device
	[  +0.275954] systemd-fstab-generator[2719]: Ignoring "noauto" option for root device
	[  +1.661512] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +1.644108] systemd-fstab-generator[2933]: Ignoring "noauto" option for root device
	[  +1.058343] kauditd_printk_skb: 169 callbacks suppressed
	[  +5.139641] kauditd_printk_skb: 35 callbacks suppressed
	[ +14.872782] systemd-fstab-generator[3778]: Ignoring "noauto" option for root device
	[  +0.100182] kauditd_printk_skb: 4 callbacks suppressed
	[Aug31 23:08] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604] <==
	{"level":"info","ts":"2024-08-31T23:07:39.422800Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:07:39.422881Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:07:39.449031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:07:39.462196Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-31T23:07:39.464759Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T23:07:39.464513Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:07:39.467847Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:07:39.465749Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T23:07:41.270007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-31T23:07:41.270065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-31T23:07:41.270120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-08-31T23:07:41.270135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.270141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.270161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.270173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.276723Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:multinode-328486 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T23:07:41.276772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:07:41.276960Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T23:07:41.277008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T23:07:41.277043Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:07:41.278074Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:07:41.278075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:07:41.278983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2024-08-31T23:07:41.279318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T23:08:27.788733Z","caller":"traceutil/trace.go:171","msg":"trace[871104458] transaction","detail":"{read_only:false; response_revision:1043; number_of_response:1; }","duration":"121.159836ms","start":"2024-08-31T23:08:27.667547Z","end":"2024-08-31T23:08:27.788707Z","steps":["trace[871104458] 'process raft request'  (duration: 121.034646ms)"],"step_count":1}
	
	
	==> etcd [980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd] <==
	{"level":"warn","ts":"2024-08-31T23:02:48.323434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.438424ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3701556105952621491 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-vvmtx\" mod_revision:584 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-31T23:02:48.323981Z","caller":"traceutil/trace.go:171","msg":"trace[1170768638] linearizableReadLoop","detail":"{readStateIndex:618; appliedIndex:617; }","duration":"505.298928ms","start":"2024-08-31T23:02:47.818660Z","end":"2024-08-31T23:02:48.323959Z","steps":["trace[1170768638] 'read index received'  (duration: 303.875598ms)","trace[1170768638] 'applied index is now lower than readState.Index'  (duration: 201.422232ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T23:02:48.324159Z","caller":"traceutil/trace.go:171","msg":"trace[2056374227] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"508.792732ms","start":"2024-08-31T23:02:47.815356Z","end":"2024-08-31T23:02:48.324148Z","steps":["trace[2056374227] 'process raft request'  (duration: 307.237928ms)","trace[2056374227] 'compare'  (duration: 200.309855ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-31T23:02:48.324263Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:02:47.815307Z","time spent":"508.911592ms","remote":"127.0.0.1:56144","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1322,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-vvmtx\" mod_revision:584 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" > >"}
	{"level":"warn","ts":"2024-08-31T23:02:48.324499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"505.83308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.324550Z","caller":"traceutil/trace.go:171","msg":"trace[1572202024] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:585; }","duration":"505.885129ms","start":"2024-08-31T23:02:47.818656Z","end":"2024-08-31T23:02:48.324541Z","steps":["trace[1572202024] 'agreement among raft nodes before linearized reading'  (duration: 505.812142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:02:48.324595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:02:47.818625Z","time spent":"505.963476ms","remote":"127.0.0.1:56144","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":28,"request content":"key:\"/registry/certificatesigningrequests\" limit:1 "}
	{"level":"warn","ts":"2024-08-31T23:02:48.324714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.516104ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.324749Z","caller":"traceutil/trace.go:171","msg":"trace[1770615465] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:585; }","duration":"468.55095ms","start":"2024-08-31T23:02:47.856192Z","end":"2024-08-31T23:02:48.324743Z","steps":["trace[1770615465] 'agreement among raft nodes before linearized reading'  (duration: 468.507859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:02:48.325187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.707646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.325278Z","caller":"traceutil/trace.go:171","msg":"trace[635815810] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:585; }","duration":"160.800526ms","start":"2024-08-31T23:02:48.164470Z","end":"2024-08-31T23:02:48.325271Z","steps":["trace[635815810] 'agreement among raft nodes before linearized reading'  (duration: 160.694228ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:02:48.636336Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.790298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-328486-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.636506Z","caller":"traceutil/trace.go:171","msg":"trace[449095893] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-328486-m03; range_end:; response_count:0; response_revision:586; }","duration":"206.974516ms","start":"2024-08-31T23:02:48.429520Z","end":"2024-08-31T23:02:48.636494Z","steps":["trace[449095893] 'range keys from in-memory index tree'  (duration: 206.737452ms)"],"step_count":1}
	2024/08/31 23:02:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-31T23:03:41.718977Z","caller":"traceutil/trace.go:171","msg":"trace[1014906409] transaction","detail":"{read_only:false; response_revision:710; number_of_response:1; }","duration":"109.758374ms","start":"2024-08-31T23:03:41.609199Z","end":"2024-08-31T23:03:41.718957Z","steps":["trace[1014906409] 'process raft request'  (duration: 109.635287ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T23:06:02.653747Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-31T23:06:02.653909Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-328486","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	{"level":"warn","ts":"2024-08-31T23:06:02.654028Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T23:06:02.654119Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T23:06:02.743452Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T23:06:02.743508Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-31T23:06:02.743577Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec1614c5c0f7335e","current-leader-member-id":"ec1614c5c0f7335e"}
	{"level":"info","ts":"2024-08-31T23:06:02.746362Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:06:02.746501Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:06:02.746509Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-328486","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	
	
	==> kernel <==
	 23:09:25 up 9 min,  0 users,  load average: 0.36, 0.36, 0.18
	Linux multinode-328486 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999] <==
	I0831 23:05:16.016356       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:26.015598       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:26.015727       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:26.015875       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:26.015910       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:05:26.015996       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:26.016018       1 main.go:299] handling current node
	I0831 23:05:36.015664       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:36.015732       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:36.015941       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:36.015948       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:05:36.016002       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:36.016038       1 main.go:299] handling current node
	I0831 23:05:46.016714       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:46.016777       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:46.016942       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:46.016967       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:05:46.017029       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:46.017051       1 main.go:299] handling current node
	I0831 23:05:56.016329       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:56.016529       1 main.go:299] handling current node
	I0831 23:05:56.016572       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:56.016592       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:56.016729       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:56.016769       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342] <==
	I0831 23:08:45.116863       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:08:55.115890       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:08:55.115942       1 main.go:299] handling current node
	I0831 23:08:55.115962       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:08:55.115970       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:08:55.116125       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:08:55.116161       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:09:05.117318       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:09:05.117356       1 main.go:299] handling current node
	I0831 23:09:05.117436       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:09:05.117443       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:09:05.117801       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:09:05.117888       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.2.0/24] 
	I0831 23:09:15.116291       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:09:15.116434       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.2.0/24] 
	I0831 23:09:15.116622       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:09:15.117071       1 main.go:299] handling current node
	I0831 23:09:15.117267       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:09:15.117348       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:09:25.117751       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:09:25.117778       1 main.go:299] handling current node
	I0831 23:09:25.117792       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:09:25.117796       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:09:25.117973       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:09:25.117981       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2] <==
	I0831 23:07:42.613076       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:07:42.613439       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:07:42.617578       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 23:07:42.625033       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:07:42.625146       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:07:42.625183       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:07:42.625518       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:07:42.625550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:07:42.625629       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:07:42.625674       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 23:07:42.629597       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:07:42.629640       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:07:42.629647       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:07:42.629653       1 cache.go:39] Caches are synced for autoregister controller
	I0831 23:07:42.650811       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:07:42.650860       1 policy_source.go:224] refreshing policies
	I0831 23:07:42.729909       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:07:43.517837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0831 23:07:44.880101       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0831 23:07:45.007163       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0831 23:07:45.026885       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0831 23:07:45.095942       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0831 23:07:45.107527       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0831 23:07:46.214691       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 23:07:46.264883       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1] <==
	W0831 23:06:02.673329       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.675858       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.675946       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.675998       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676036       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676207       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676276       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676324       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676370       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676489       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676518       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676575       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676613       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676642       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676708       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676767       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676816       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676848       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676875       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676921       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676949       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676996       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.677044       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.677085       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.677154       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5] <==
	I0831 23:03:35.925860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:35.926200       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:03:37.470229       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-328486-m03\" does not exist"
	I0831 23:03:37.470522       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:03:37.482811       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-328486-m03" podCIDRs=["10.244.3.0/24"]
	I0831 23:03:37.482862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:37.482886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:37.485720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:37.936938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:38.263935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:39.907859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:47.823343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:57.277224       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:03:57.277242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:57.290007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:59.908549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:39.929786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:39.929914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:04:39.946944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:44.963643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:04:44.994170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:04:45.018281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.633606ms"
	I0831 23:04:45.020165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.846µs"
	I0831 23:04:45.033481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:55.108050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	
	
	==> kube-controller-manager [86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5] <==
	I0831 23:08:43.154337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:08:43.165111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:08:43.172832       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.166µs"
	I0831 23:08:43.186366       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.034µs"
	I0831 23:08:46.018285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:08:46.887157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.138112ms"
	I0831 23:08:46.887858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.425µs"
	I0831 23:08:54.036808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:09:01.007879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:01.032683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:01.247810       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:01.248633       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:09:02.335127       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:09:02.336184       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-328486-m03\" does not exist"
	I0831 23:09:02.346923       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-328486-m03" podCIDRs=["10.244.2.0/24"]
	I0831 23:09:02.348010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:02.348193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:02.355361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:02.756131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:03.090899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:06.105185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:12.534980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:22.096786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:09:22.097060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:22.114101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	
	
	==> kube-proxy [a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 23:01:01.691265       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 23:01:01.704580       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0831 23:01:01.704775       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 23:01:01.788894       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 23:01:01.788940       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 23:01:01.788967       1 server_linux.go:169] "Using iptables Proxier"
	I0831 23:01:01.791574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 23:01:01.791854       1 server.go:483] "Version info" version="v1.31.0"
	I0831 23:01:01.791885       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:01:01.793744       1 config.go:197] "Starting service config controller"
	I0831 23:01:01.793770       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 23:01:01.793794       1 config.go:104] "Starting endpoint slice config controller"
	I0831 23:01:01.793798       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 23:01:01.794159       1 config.go:326] "Starting node config controller"
	I0831 23:01:01.794197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 23:01:01.894875       1 shared_informer.go:320] Caches are synced for node config
	I0831 23:01:01.894926       1 shared_informer.go:320] Caches are synced for service config
	I0831 23:01:01.894966       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 23:07:44.216666       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 23:07:44.231933       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0831 23:07:44.232011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 23:07:44.303942       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 23:07:44.303988       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 23:07:44.304018       1 server_linux.go:169] "Using iptables Proxier"
	I0831 23:07:44.321062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 23:07:44.321313       1 server.go:483] "Version info" version="v1.31.0"
	I0831 23:07:44.321323       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:07:44.323228       1 config.go:197] "Starting service config controller"
	I0831 23:07:44.323323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 23:07:44.323592       1 config.go:326] "Starting node config controller"
	I0831 23:07:44.323620       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 23:07:44.323885       1 config.go:104] "Starting endpoint slice config controller"
	I0831 23:07:44.323916       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 23:07:44.424464       1 shared_informer.go:320] Caches are synced for node config
	I0831 23:07:44.424516       1 shared_informer.go:320] Caches are synced for service config
	I0831 23:07:44.425518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97] <==
	I0831 23:07:40.188290       1 serving.go:386] Generated self-signed cert in-memory
	W0831 23:07:42.602871       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0831 23:07:42.602914       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 23:07:42.602924       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 23:07:42.602932       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 23:07:42.643737       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0831 23:07:42.643789       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:07:42.646133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0831 23:07:42.646205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0831 23:07:42.646230       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0831 23:07:42.646342       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:07:42.746978       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90] <==
	E0831 23:00:53.289947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.289980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:53.290036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:53.290110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290258       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 23:00:53.291205       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 23:00:53.290523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 23:00:53.291340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 23:00:53.291490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 23:00:53.291548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 23:00:53.291619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:53.291671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.291001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 23:00:53.291732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:54.119609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:54.119860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:54.164268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 23:00:54.164672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 23:00:54.878647       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 23:06:02.647125       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 31 23:07:48 multinode-328486 kubelet[2940]: E0831 23:07:48.304076    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145668303742196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:07:48 multinode-328486 kubelet[2940]: E0831 23:07:48.304116    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145668303742196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:07:58 multinode-328486 kubelet[2940]: E0831 23:07:58.307208    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145678306954115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:07:58 multinode-328486 kubelet[2940]: E0831 23:07:58.307354    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145678306954115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:08 multinode-328486 kubelet[2940]: E0831 23:08:08.311665    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145688310993666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:08 multinode-328486 kubelet[2940]: E0831 23:08:08.311739    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145688310993666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:18 multinode-328486 kubelet[2940]: E0831 23:08:18.313761    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145698313281413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:18 multinode-328486 kubelet[2940]: E0831 23:08:18.314463    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145698313281413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:28 multinode-328486 kubelet[2940]: E0831 23:08:28.317494    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145708316823084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:28 multinode-328486 kubelet[2940]: E0831 23:08:28.317545    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145708316823084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:38 multinode-328486 kubelet[2940]: E0831 23:08:38.286108    2940 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 23:08:38 multinode-328486 kubelet[2940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 23:08:38 multinode-328486 kubelet[2940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 23:08:38 multinode-328486 kubelet[2940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 23:08:38 multinode-328486 kubelet[2940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 23:08:38 multinode-328486 kubelet[2940]: E0831 23:08:38.320361    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145718319848611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:38 multinode-328486 kubelet[2940]: E0831 23:08:38.320445    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145718319848611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:48 multinode-328486 kubelet[2940]: E0831 23:08:48.322991    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145728322357021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:48 multinode-328486 kubelet[2940]: E0831 23:08:48.329726    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145728322357021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:58 multinode-328486 kubelet[2940]: E0831 23:08:58.337207    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145738336961248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:08:58 multinode-328486 kubelet[2940]: E0831 23:08:58.337253    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145738336961248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:09:08 multinode-328486 kubelet[2940]: E0831 23:09:08.340088    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145748338833822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:09:08 multinode-328486 kubelet[2940]: E0831 23:09:08.340169    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145748338833822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:09:18 multinode-328486 kubelet[2940]: E0831 23:09:18.341758    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145758341344103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:09:18 multinode-328486 kubelet[2940]: E0831 23:09:18.341782    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145758341344103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 23:09:24.501274   52319 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18943-13149/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-328486 -n multinode-328486
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-328486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:286: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.52s)
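The "bufio.Scanner: token too long" error in the stderr block above is a standard Go standard-library failure: bufio.Scanner refuses any single token (here, one line of lastStart.txt) larger than its buffer, which defaults to 64 KiB. A minimal, hypothetical Go sketch of that failure mode and the usual workaround (raising the limit via Scanner.Buffer); the file path is a stand-in, not the test harness's actual code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Stand-in for .minikube/logs/lastStart.txt from the error above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call the scanner stops at bufio.MaxScanTokenSize (64 KiB)
		// and Err() reports "bufio.Scanner: token too long" for oversized lines.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}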

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 stop
E0831 23:09:59.875468   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328486 stop: exit status 82 (2m0.46535665s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-328486-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-328486 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status
E0831 23:11:42.547470   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328486 status: exit status 3 (18.836817658s)

                                                
                                                
-- stdout --
	multinode-328486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328486-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 23:11:47.683627   52990 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0831 23:11:47.683659   52990 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-328486 status" : exit status 3
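The status failure above comes down to the worker VM being unreachable over SSH: the stderr shows the TCP dial to 192.168.39.184:22 failing with "no route to host", which the status output surfaces as host: Error / kubelet: Nonexistent. A minimal sketch of that reachability check in Go, assuming the address from the output above; this is an illustration only, not minikube's actual client code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the status stderr above (node multinode-328486-m02).
		addr := "192.168.39.184:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A VM that never finished stopping or booting typically yields
			// "connect: no route to host" here, matching the log.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable on", addr)
	}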
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328486 -n multinode-328486
helpers_test.go:245: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-328486 logs -n 25: (1.493086003s)
helpers_test.go:253: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486:/home/docker/cp-test_multinode-328486-m02_multinode-328486.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486 sudo cat                                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m02_multinode-328486.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03:/home/docker/cp-test_multinode-328486-m02_multinode-328486-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486-m03 sudo cat                                   | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m02_multinode-328486-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp testdata/cp-test.txt                                                | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1488925976/001/cp-test_multinode-328486-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486:/home/docker/cp-test_multinode-328486-m03_multinode-328486.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486 sudo cat                                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m03_multinode-328486.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt                       | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m02:/home/docker/cp-test_multinode-328486-m03_multinode-328486-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n                                                                 | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | multinode-328486-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328486 ssh -n multinode-328486-m02 sudo cat                                   | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | /home/docker/cp-test_multinode-328486-m03_multinode-328486-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-328486 node stop m03                                                          | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	| node    | multinode-328486 node start                                                             | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-328486                                                                | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC |                     |
	| stop    | -p multinode-328486                                                                     | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC |                     |
	| start   | -p multinode-328486                                                                     | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:06 UTC | 31 Aug 24 23:09 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-328486                                                                | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:09 UTC |                     |
	| node    | multinode-328486 node delete                                                            | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:09 UTC | 31 Aug 24 23:09 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-328486 stop                                                                   | multinode-328486 | jenkins | v1.33.1 | 31 Aug 24 23:09 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:06:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:06:01.776416   51160 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:06:01.776549   51160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:06:01.776559   51160 out.go:358] Setting ErrFile to fd 2...
	I0831 23:06:01.776565   51160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:06:01.776775   51160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:06:01.777317   51160 out.go:352] Setting JSON to false
	I0831 23:06:01.778349   51160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6509,"bootTime":1725139053,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:06:01.778412   51160 start.go:139] virtualization: kvm guest
	I0831 23:06:01.780741   51160 out.go:177] * [multinode-328486] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:06:01.782061   51160 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:06:01.782083   51160 notify.go:220] Checking for updates...
	I0831 23:06:01.784548   51160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:06:01.785813   51160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:06:01.787013   51160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:06:01.788388   51160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:06:01.789680   51160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:06:01.791646   51160 config.go:182] Loaded profile config "multinode-328486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:06:01.791758   51160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:06:01.792367   51160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:06:01.792465   51160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:06:01.807995   51160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0831 23:06:01.808508   51160 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:06:01.809004   51160 main.go:141] libmachine: Using API Version  1
	I0831 23:06:01.809024   51160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:06:01.809324   51160 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:06:01.809457   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:06:01.848611   51160 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 23:06:01.850113   51160 start.go:297] selected driver: kvm2
	I0831 23:06:01.850131   51160 start.go:901] validating driver "kvm2" against &{Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:06:01.850281   51160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:06:01.850613   51160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:06:01.850695   51160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 23:06:01.866302   51160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 23:06:01.867016   51160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:06:01.867057   51160 cni.go:84] Creating CNI manager for ""
	I0831 23:06:01.867070   51160 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0831 23:06:01.867139   51160 start.go:340] cluster config:
	{Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:06:01.867297   51160 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:06:01.869073   51160 out.go:177] * Starting "multinode-328486" primary control-plane node in "multinode-328486" cluster
	I0831 23:06:01.870698   51160 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:06:01.870742   51160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 23:06:01.870749   51160 cache.go:56] Caching tarball of preloaded images
	I0831 23:06:01.870838   51160 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 23:06:01.870848   51160 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:06:01.870973   51160 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/config.json ...
	I0831 23:06:01.871234   51160 start.go:360] acquireMachinesLock for multinode-328486: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 23:06:01.871289   51160 start.go:364] duration metric: took 33.863µs to acquireMachinesLock for "multinode-328486"
	I0831 23:06:01.871308   51160 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:06:01.871315   51160 fix.go:54] fixHost starting: 
	I0831 23:06:01.871660   51160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:06:01.871694   51160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:06:01.886247   51160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0831 23:06:01.886690   51160 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:06:01.887258   51160 main.go:141] libmachine: Using API Version  1
	I0831 23:06:01.887279   51160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:06:01.887615   51160 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:06:01.887824   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:06:01.888045   51160 main.go:141] libmachine: (multinode-328486) Calling .GetState
	I0831 23:06:01.889924   51160 fix.go:112] recreateIfNeeded on multinode-328486: state=Running err=<nil>
	W0831 23:06:01.889944   51160 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:06:01.891965   51160 out.go:177] * Updating the running kvm2 "multinode-328486" VM ...
	I0831 23:06:01.893267   51160 machine.go:93] provisionDockerMachine start ...
	I0831 23:06:01.893288   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:06:01.893564   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:01.896284   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:01.896803   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:01.896863   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:01.896974   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:01.897162   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:01.897328   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:01.897431   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:01.897562   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:01.897796   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:01.897813   51160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:06:02.000644   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328486
	
	I0831 23:06:02.000670   51160 main.go:141] libmachine: (multinode-328486) Calling .GetMachineName
	I0831 23:06:02.000906   51160 buildroot.go:166] provisioning hostname "multinode-328486"
	I0831 23:06:02.000930   51160 main.go:141] libmachine: (multinode-328486) Calling .GetMachineName
	I0831 23:06:02.001139   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.003882   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.004244   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.004274   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.004410   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.004580   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.004740   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.004884   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.005047   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:02.005218   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:02.005230   51160 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328486 && echo "multinode-328486" | sudo tee /etc/hostname
	I0831 23:06:02.121699   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328486
	
	I0831 23:06:02.121728   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.124650   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.124979   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.124999   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.125166   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.125353   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.125526   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.125657   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.125874   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:02.126042   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:02.126061   51160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328486/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:06:02.228447   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:06:02.228477   51160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 23:06:02.228515   51160 buildroot.go:174] setting up certificates
	I0831 23:06:02.228526   51160 provision.go:84] configureAuth start
	I0831 23:06:02.228539   51160 main.go:141] libmachine: (multinode-328486) Calling .GetMachineName
	I0831 23:06:02.228825   51160 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:06:02.231186   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.231567   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.231589   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.231709   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.233944   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.234349   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.234379   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.234517   51160 provision.go:143] copyHostCerts
	I0831 23:06:02.234547   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 23:06:02.234582   51160 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 23:06:02.234600   51160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 23:06:02.234665   51160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 23:06:02.234753   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 23:06:02.234770   51160 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 23:06:02.234776   51160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 23:06:02.234800   51160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 23:06:02.234854   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 23:06:02.234870   51160 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 23:06:02.234876   51160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 23:06:02.234897   51160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 23:06:02.234953   51160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.multinode-328486 san=[127.0.0.1 192.168.39.107 localhost minikube multinode-328486]
	I0831 23:06:02.359379   51160 provision.go:177] copyRemoteCerts
	I0831 23:06:02.359431   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:06:02.359451   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.361856   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.362216   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.362238   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.362461   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.362656   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.362811   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.362946   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:06:02.442717   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:06:02.442777   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:06:02.469251   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:06:02.469321   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0831 23:06:02.502439   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:06:02.502506   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:06:02.529613   51160 provision.go:87] duration metric: took 301.075477ms to configureAuth
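	A hypothetical spot-check of the certificate placement performed above (not part of the captured log; the binary, profile name and remote paths are taken from the commands and scp lines earlier in this report):
	
	out/minikube-linux-amd64 -p multinode-328486 ssh -- \
	    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	# Expect three files whose sizes match the byte counts logged above (1082, 1216 and 1679 bytes).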
	I0831 23:06:02.529638   51160 buildroot.go:189] setting minikube options for container-runtime
	I0831 23:06:02.529838   51160 config.go:182] Loaded profile config "multinode-328486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:06:02.529899   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:06:02.532322   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.532618   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:06:02.532647   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:06:02.532783   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:06:02.532940   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.533078   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:06:02.533259   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:06:02.533403   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:06:02.533564   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:06:02.533583   51160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:07:33.286751   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:07:33.286778   51160 machine.go:96] duration metric: took 1m31.393500147s to provisionDockerMachine
	I0831 23:07:33.286789   51160 start.go:293] postStartSetup for "multinode-328486" (driver="kvm2")
	I0831 23:07:33.286800   51160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:07:33.286815   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.287096   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:07:33.287126   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.290388   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.290787   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.290814   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.290924   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.291092   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.291275   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.291418   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:07:33.376595   51160 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:07:33.381255   51160 command_runner.go:130] > NAME=Buildroot
	I0831 23:07:33.381283   51160 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0831 23:07:33.381288   51160 command_runner.go:130] > ID=buildroot
	I0831 23:07:33.381292   51160 command_runner.go:130] > VERSION_ID=2023.02.9
	I0831 23:07:33.381297   51160 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0831 23:07:33.381320   51160 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 23:07:33.381330   51160 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 23:07:33.381385   51160 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 23:07:33.381461   51160 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 23:07:33.381471   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /etc/ssl/certs/203692.pem
	I0831 23:07:33.381554   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:07:33.391422   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:07:33.416416   51160 start.go:296] duration metric: took 129.614537ms for postStartSetup
	I0831 23:07:33.416455   51160 fix.go:56] duration metric: took 1m31.545140659s for fixHost
	I0831 23:07:33.416473   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.419529   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.419905   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.419939   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.420114   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.420313   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.420479   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.420661   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.420849   51160 main.go:141] libmachine: Using SSH client type: native
	I0831 23:07:33.421007   51160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0831 23:07:33.421017   51160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 23:07:33.520202   51160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725145653.497677918
	
	I0831 23:07:33.520221   51160 fix.go:216] guest clock: 1725145653.497677918
	I0831 23:07:33.520228   51160 fix.go:229] Guest: 2024-08-31 23:07:33.497677918 +0000 UTC Remote: 2024-08-31 23:07:33.416459029 +0000 UTC m=+91.672998730 (delta=81.218889ms)
	I0831 23:07:33.520267   51160 fix.go:200] guest clock delta is within tolerance: 81.218889ms
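	For readers unfamiliar with the clock-skew check above, a rough hand-run equivalent (hypothetical, not from the log; it folds SSH round-trip time into the delta, so it only approximates the tolerance check minikube performs internally):
	
	host_ts=$(date +%s.%N)
	guest_ts=$(out/minikube-linux-amd64 -p multinode-328486 ssh -- date +%s.%N)
	echo "guest - host = $(echo "$guest_ts - $host_ts" | bc) s"   # the log above measured a delta of ~81ms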
	I0831 23:07:33.520273   51160 start.go:83] releasing machines lock for "multinode-328486", held for 1m31.648972087s
	I0831 23:07:33.520301   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.520592   51160 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:07:33.523570   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.524087   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.524117   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.524255   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.524746   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.524944   51160 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:07:33.525036   51160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:07:33.525084   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.525213   51160 ssh_runner.go:195] Run: cat /version.json
	I0831 23:07:33.525235   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:07:33.528060   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.528406   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.528433   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.528452   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.528595   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.528751   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.528914   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.528955   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:33.528981   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:33.529049   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:07:33.529312   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:07:33.529469   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:07:33.529600   51160 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:07:33.529804   51160 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:07:33.627681   51160 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0831 23:07:33.627748   51160 command_runner.go:130] > {"iso_version": "v1.33.1-1724862017-19530", "kicbase_version": "v0.0.44-1724775115-19521", "minikube_version": "v1.33.1", "commit": "0ce952d110f81b7b94ba20c385955675855b59fb"}
	I0831 23:07:33.627889   51160 ssh_runner.go:195] Run: systemctl --version
	I0831 23:07:33.634129   51160 command_runner.go:130] > systemd 252 (252)
	I0831 23:07:33.634171   51160 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0831 23:07:33.634226   51160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:07:33.794731   51160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:07:33.800849   51160 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0831 23:07:33.800887   51160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 23:07:33.800938   51160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:07:33.810243   51160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:07:33.810265   51160 start.go:495] detecting cgroup driver to use...
	I0831 23:07:33.810335   51160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:07:33.830472   51160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:07:33.846944   51160 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:07:33.846993   51160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:07:33.860617   51160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:07:33.874145   51160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:07:34.040693   51160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:07:34.184641   51160 docker.go:233] disabling docker service ...
	I0831 23:07:34.184716   51160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:07:34.201443   51160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:07:34.215242   51160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:07:34.358285   51160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:07:34.500491   51160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:07:34.514531   51160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:07:34.535737   51160 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0831 23:07:34.535781   51160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:07:34.535826   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.546695   51160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:07:34.546765   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.557349   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.567594   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.577396   51160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:07:34.587675   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.597956   51160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.609097   51160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:07:34.619495   51160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:07:34.629416   51160 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0831 23:07:34.629474   51160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:07:34.639176   51160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:07:34.778744   51160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:07:35.980148   51160 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.201363499s)
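	To replay the CRI-O reconfiguration above by hand, a minimal verification sketch (hypothetical, not part of the log), run inside the node, for example via out/minikube-linux-amd64 -p multinode-328486 ssh, and assuming the config path used by the sed commands above:
	
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio   # after the restart logged above this should print "active"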
	I0831 23:07:35.980178   51160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:07:35.980227   51160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:07:35.985438   51160 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0831 23:07:35.985458   51160 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0831 23:07:35.985465   51160 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0831 23:07:35.985471   51160 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0831 23:07:35.985476   51160 command_runner.go:130] > Access: 2024-08-31 23:07:35.855299816 +0000
	I0831 23:07:35.985484   51160 command_runner.go:130] > Modify: 2024-08-31 23:07:35.841299522 +0000
	I0831 23:07:35.985497   51160 command_runner.go:130] > Change: 2024-08-31 23:07:35.841299522 +0000
	I0831 23:07:35.985502   51160 command_runner.go:130] >  Birth: -
	I0831 23:07:35.985519   51160 start.go:563] Will wait 60s for crictl version
	I0831 23:07:35.985552   51160 ssh_runner.go:195] Run: which crictl
	I0831 23:07:35.989350   51160 command_runner.go:130] > /usr/bin/crictl
	I0831 23:07:35.989414   51160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:07:36.028153   51160 command_runner.go:130] > Version:  0.1.0
	I0831 23:07:36.028173   51160 command_runner.go:130] > RuntimeName:  cri-o
	I0831 23:07:36.028177   51160 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0831 23:07:36.028183   51160 command_runner.go:130] > RuntimeApiVersion:  v1
	I0831 23:07:36.030902   51160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 23:07:36.030979   51160 ssh_runner.go:195] Run: crio --version
	I0831 23:07:36.059163   51160 command_runner.go:130] > crio version 1.29.1
	I0831 23:07:36.059181   51160 command_runner.go:130] > Version:        1.29.1
	I0831 23:07:36.059188   51160 command_runner.go:130] > GitCommit:      unknown
	I0831 23:07:36.059213   51160 command_runner.go:130] > GitCommitDate:  unknown
	I0831 23:07:36.059220   51160 command_runner.go:130] > GitTreeState:   clean
	I0831 23:07:36.059228   51160 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0831 23:07:36.059234   51160 command_runner.go:130] > GoVersion:      go1.21.6
	I0831 23:07:36.059240   51160 command_runner.go:130] > Compiler:       gc
	I0831 23:07:36.059248   51160 command_runner.go:130] > Platform:       linux/amd64
	I0831 23:07:36.059257   51160 command_runner.go:130] > Linkmode:       dynamic
	I0831 23:07:36.059264   51160 command_runner.go:130] > BuildTags:      
	I0831 23:07:36.059272   51160 command_runner.go:130] >   containers_image_ostree_stub
	I0831 23:07:36.059279   51160 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0831 23:07:36.059286   51160 command_runner.go:130] >   btrfs_noversion
	I0831 23:07:36.059291   51160 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0831 23:07:36.059296   51160 command_runner.go:130] >   libdm_no_deferred_remove
	I0831 23:07:36.059300   51160 command_runner.go:130] >   seccomp
	I0831 23:07:36.059304   51160 command_runner.go:130] > LDFlags:          unknown
	I0831 23:07:36.059308   51160 command_runner.go:130] > SeccompEnabled:   true
	I0831 23:07:36.059312   51160 command_runner.go:130] > AppArmorEnabled:  false
	I0831 23:07:36.059410   51160 ssh_runner.go:195] Run: crio --version
	I0831 23:07:36.089109   51160 command_runner.go:130] > crio version 1.29.1
	I0831 23:07:36.089135   51160 command_runner.go:130] > Version:        1.29.1
	I0831 23:07:36.089144   51160 command_runner.go:130] > GitCommit:      unknown
	I0831 23:07:36.089150   51160 command_runner.go:130] > GitCommitDate:  unknown
	I0831 23:07:36.089156   51160 command_runner.go:130] > GitTreeState:   clean
	I0831 23:07:36.089164   51160 command_runner.go:130] > BuildDate:      2024-08-28T21:33:51Z
	I0831 23:07:36.089170   51160 command_runner.go:130] > GoVersion:      go1.21.6
	I0831 23:07:36.089177   51160 command_runner.go:130] > Compiler:       gc
	I0831 23:07:36.089184   51160 command_runner.go:130] > Platform:       linux/amd64
	I0831 23:07:36.089191   51160 command_runner.go:130] > Linkmode:       dynamic
	I0831 23:07:36.089198   51160 command_runner.go:130] > BuildTags:      
	I0831 23:07:36.089203   51160 command_runner.go:130] >   containers_image_ostree_stub
	I0831 23:07:36.089208   51160 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0831 23:07:36.089213   51160 command_runner.go:130] >   btrfs_noversion
	I0831 23:07:36.089217   51160 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0831 23:07:36.089233   51160 command_runner.go:130] >   libdm_no_deferred_remove
	I0831 23:07:36.089237   51160 command_runner.go:130] >   seccomp
	I0831 23:07:36.089241   51160 command_runner.go:130] > LDFlags:          unknown
	I0831 23:07:36.089246   51160 command_runner.go:130] > SeccompEnabled:   true
	I0831 23:07:36.089253   51160 command_runner.go:130] > AppArmorEnabled:  false
	I0831 23:07:36.092605   51160 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 23:07:36.094483   51160 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:07:36.097076   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:36.097525   51160 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:07:36.097557   51160 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:07:36.097806   51160 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0831 23:07:36.102182   51160 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0831 23:07:36.102268   51160 kubeadm.go:883] updating cluster {Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:07:36.102451   51160 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:07:36.102504   51160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:07:36.144241   51160 command_runner.go:130] > {
	I0831 23:07:36.144267   51160 command_runner.go:130] >   "images": [
	I0831 23:07:36.144272   51160 command_runner.go:130] >     {
	I0831 23:07:36.144285   51160 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0831 23:07:36.144292   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144300   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0831 23:07:36.144306   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144312   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144326   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0831 23:07:36.144335   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0831 23:07:36.144341   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144348   51160 command_runner.go:130] >       "size": "87165492",
	I0831 23:07:36.144354   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144360   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144375   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144383   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144390   51160 command_runner.go:130] >     },
	I0831 23:07:36.144395   51160 command_runner.go:130] >     {
	I0831 23:07:36.144404   51160 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0831 23:07:36.144412   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144417   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0831 23:07:36.144421   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144425   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144432   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0831 23:07:36.144439   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0831 23:07:36.144444   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144452   51160 command_runner.go:130] >       "size": "87190579",
	I0831 23:07:36.144458   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144471   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144480   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144487   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144496   51160 command_runner.go:130] >     },
	I0831 23:07:36.144502   51160 command_runner.go:130] >     {
	I0831 23:07:36.144520   51160 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0831 23:07:36.144528   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144534   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0831 23:07:36.144538   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144542   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144551   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0831 23:07:36.144566   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0831 23:07:36.144573   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144580   51160 command_runner.go:130] >       "size": "1363676",
	I0831 23:07:36.144588   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144595   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144604   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144614   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144619   51160 command_runner.go:130] >     },
	I0831 23:07:36.144641   51160 command_runner.go:130] >     {
	I0831 23:07:36.144668   51160 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0831 23:07:36.144678   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144687   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0831 23:07:36.144692   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144698   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144708   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0831 23:07:36.144725   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0831 23:07:36.144733   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144739   51160 command_runner.go:130] >       "size": "31470524",
	I0831 23:07:36.144746   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144753   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144761   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144768   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144777   51160 command_runner.go:130] >     },
	I0831 23:07:36.144782   51160 command_runner.go:130] >     {
	I0831 23:07:36.144795   51160 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0831 23:07:36.144804   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144812   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0831 23:07:36.144820   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144826   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144839   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0831 23:07:36.144852   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0831 23:07:36.144858   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144862   51160 command_runner.go:130] >       "size": "61245718",
	I0831 23:07:36.144868   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.144872   51160 command_runner.go:130] >       "username": "nonroot",
	I0831 23:07:36.144877   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144881   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144885   51160 command_runner.go:130] >     },
	I0831 23:07:36.144888   51160 command_runner.go:130] >     {
	I0831 23:07:36.144894   51160 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0831 23:07:36.144900   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144905   51160 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0831 23:07:36.144909   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144913   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.144922   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0831 23:07:36.144931   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0831 23:07:36.144936   51160 command_runner.go:130] >       ],
	I0831 23:07:36.144940   51160 command_runner.go:130] >       "size": "149009664",
	I0831 23:07:36.144946   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.144950   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.144953   51160 command_runner.go:130] >       },
	I0831 23:07:36.144959   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.144963   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.144968   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.144971   51160 command_runner.go:130] >     },
	I0831 23:07:36.144975   51160 command_runner.go:130] >     {
	I0831 23:07:36.144982   51160 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0831 23:07:36.144986   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.144993   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0831 23:07:36.144998   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145002   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145011   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0831 23:07:36.145020   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0831 23:07:36.145025   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145029   51160 command_runner.go:130] >       "size": "95233506",
	I0831 23:07:36.145035   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145044   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.145049   51160 command_runner.go:130] >       },
	I0831 23:07:36.145053   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145057   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145063   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145067   51160 command_runner.go:130] >     },
	I0831 23:07:36.145072   51160 command_runner.go:130] >     {
	I0831 23:07:36.145078   51160 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0831 23:07:36.145084   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145089   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0831 23:07:36.145095   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145099   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145126   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0831 23:07:36.145136   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0831 23:07:36.145141   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145145   51160 command_runner.go:130] >       "size": "89437512",
	I0831 23:07:36.145151   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145155   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.145161   51160 command_runner.go:130] >       },
	I0831 23:07:36.145165   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145168   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145172   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145175   51160 command_runner.go:130] >     },
	I0831 23:07:36.145179   51160 command_runner.go:130] >     {
	I0831 23:07:36.145186   51160 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0831 23:07:36.145190   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145194   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0831 23:07:36.145198   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145201   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145208   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0831 23:07:36.145214   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0831 23:07:36.145218   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145222   51160 command_runner.go:130] >       "size": "92728217",
	I0831 23:07:36.145225   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.145229   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145234   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145242   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145245   51160 command_runner.go:130] >     },
	I0831 23:07:36.145248   51160 command_runner.go:130] >     {
	I0831 23:07:36.145254   51160 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0831 23:07:36.145257   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145262   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0831 23:07:36.145268   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145272   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145281   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0831 23:07:36.145290   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0831 23:07:36.145295   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145299   51160 command_runner.go:130] >       "size": "68420936",
	I0831 23:07:36.145305   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145309   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.145314   51160 command_runner.go:130] >       },
	I0831 23:07:36.145319   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145324   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145328   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.145334   51160 command_runner.go:130] >     },
	I0831 23:07:36.145337   51160 command_runner.go:130] >     {
	I0831 23:07:36.145342   51160 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0831 23:07:36.145348   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.145355   51160 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0831 23:07:36.145360   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145369   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.145378   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0831 23:07:36.145386   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0831 23:07:36.145392   51160 command_runner.go:130] >       ],
	I0831 23:07:36.145396   51160 command_runner.go:130] >       "size": "742080",
	I0831 23:07:36.145402   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.145406   51160 command_runner.go:130] >         "value": "65535"
	I0831 23:07:36.145411   51160 command_runner.go:130] >       },
	I0831 23:07:36.145415   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.145421   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.145426   51160 command_runner.go:130] >       "pinned": true
	I0831 23:07:36.145431   51160 command_runner.go:130] >     }
	I0831 23:07:36.145439   51160 command_runner.go:130] >   ]
	I0831 23:07:36.145445   51160 command_runner.go:130] > }
	I0831 23:07:36.145639   51160 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:07:36.145663   51160 crio.go:433] Images already preloaded, skipping extraction
	I0831 23:07:36.145717   51160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:07:36.178073   51160 command_runner.go:130] > {
	I0831 23:07:36.178094   51160 command_runner.go:130] >   "images": [
	I0831 23:07:36.178098   51160 command_runner.go:130] >     {
	I0831 23:07:36.178105   51160 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0831 23:07:36.178110   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178119   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0831 23:07:36.178124   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178131   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178145   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0831 23:07:36.178156   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0831 23:07:36.178161   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178166   51160 command_runner.go:130] >       "size": "87165492",
	I0831 23:07:36.178170   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178175   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178187   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178195   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178198   51160 command_runner.go:130] >     },
	I0831 23:07:36.178202   51160 command_runner.go:130] >     {
	I0831 23:07:36.178211   51160 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0831 23:07:36.178218   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178227   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0831 23:07:36.178236   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178243   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178256   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0831 23:07:36.178266   51160 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0831 23:07:36.178272   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178276   51160 command_runner.go:130] >       "size": "87190579",
	I0831 23:07:36.178280   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178293   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178302   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178312   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178321   51160 command_runner.go:130] >     },
	I0831 23:07:36.178328   51160 command_runner.go:130] >     {
	I0831 23:07:36.178344   51160 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0831 23:07:36.178354   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178362   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0831 23:07:36.178365   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178372   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178379   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0831 23:07:36.178393   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0831 23:07:36.178403   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178419   51160 command_runner.go:130] >       "size": "1363676",
	I0831 23:07:36.178428   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178438   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178450   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178460   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178464   51160 command_runner.go:130] >     },
	I0831 23:07:36.178472   51160 command_runner.go:130] >     {
	I0831 23:07:36.178481   51160 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0831 23:07:36.178491   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178502   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0831 23:07:36.178511   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178521   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178536   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0831 23:07:36.178556   51160 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0831 23:07:36.178565   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178574   51160 command_runner.go:130] >       "size": "31470524",
	I0831 23:07:36.178584   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178593   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178602   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178611   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178619   51160 command_runner.go:130] >     },
	I0831 23:07:36.178626   51160 command_runner.go:130] >     {
	I0831 23:07:36.178635   51160 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0831 23:07:36.178643   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178650   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0831 23:07:36.178658   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178665   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178680   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0831 23:07:36.178704   51160 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0831 23:07:36.178712   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178722   51160 command_runner.go:130] >       "size": "61245718",
	I0831 23:07:36.178730   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.178738   51160 command_runner.go:130] >       "username": "nonroot",
	I0831 23:07:36.178746   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178756   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178764   51160 command_runner.go:130] >     },
	I0831 23:07:36.178772   51160 command_runner.go:130] >     {
	I0831 23:07:36.178782   51160 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0831 23:07:36.178790   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178801   51160 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0831 23:07:36.178809   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178817   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178825   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0831 23:07:36.178839   51160 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0831 23:07:36.178848   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178858   51160 command_runner.go:130] >       "size": "149009664",
	I0831 23:07:36.178866   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.178874   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.178885   51160 command_runner.go:130] >       },
	I0831 23:07:36.178895   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.178903   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.178909   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.178915   51160 command_runner.go:130] >     },
	I0831 23:07:36.178923   51160 command_runner.go:130] >     {
	I0831 23:07:36.178937   51160 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0831 23:07:36.178946   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.178955   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0831 23:07:36.178962   51160 command_runner.go:130] >       ],
	I0831 23:07:36.178969   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.178987   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0831 23:07:36.179003   51160 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0831 23:07:36.179011   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179020   51160 command_runner.go:130] >       "size": "95233506",
	I0831 23:07:36.179029   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179046   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.179054   51160 command_runner.go:130] >       },
	I0831 23:07:36.179063   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179073   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179079   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179085   51160 command_runner.go:130] >     },
	I0831 23:07:36.179090   51160 command_runner.go:130] >     {
	I0831 23:07:36.179101   51160 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0831 23:07:36.179111   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179122   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0831 23:07:36.179130   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179137   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179166   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0831 23:07:36.179178   51160 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0831 23:07:36.179186   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179196   51160 command_runner.go:130] >       "size": "89437512",
	I0831 23:07:36.179205   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179214   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.179219   51160 command_runner.go:130] >       },
	I0831 23:07:36.179228   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179237   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179246   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179255   51160 command_runner.go:130] >     },
	I0831 23:07:36.179261   51160 command_runner.go:130] >     {
	I0831 23:07:36.179269   51160 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0831 23:07:36.179279   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179290   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0831 23:07:36.179298   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179306   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179320   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0831 23:07:36.179346   51160 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0831 23:07:36.179355   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179361   51160 command_runner.go:130] >       "size": "92728217",
	I0831 23:07:36.179370   51160 command_runner.go:130] >       "uid": null,
	I0831 23:07:36.179377   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179386   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179403   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179416   51160 command_runner.go:130] >     },
	I0831 23:07:36.179432   51160 command_runner.go:130] >     {
	I0831 23:07:36.179445   51160 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0831 23:07:36.179453   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179464   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0831 23:07:36.179473   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179480   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179491   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0831 23:07:36.179504   51160 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0831 23:07:36.179513   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179520   51160 command_runner.go:130] >       "size": "68420936",
	I0831 23:07:36.179528   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179538   51160 command_runner.go:130] >         "value": "0"
	I0831 23:07:36.179545   51160 command_runner.go:130] >       },
	I0831 23:07:36.179554   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179563   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179571   51160 command_runner.go:130] >       "pinned": false
	I0831 23:07:36.179577   51160 command_runner.go:130] >     },
	I0831 23:07:36.179595   51160 command_runner.go:130] >     {
	I0831 23:07:36.179609   51160 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0831 23:07:36.179624   51160 command_runner.go:130] >       "repoTags": [
	I0831 23:07:36.179639   51160 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0831 23:07:36.179648   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179657   51160 command_runner.go:130] >       "repoDigests": [
	I0831 23:07:36.179667   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0831 23:07:36.179681   51160 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0831 23:07:36.179689   51160 command_runner.go:130] >       ],
	I0831 23:07:36.179699   51160 command_runner.go:130] >       "size": "742080",
	I0831 23:07:36.179707   51160 command_runner.go:130] >       "uid": {
	I0831 23:07:36.179713   51160 command_runner.go:130] >         "value": "65535"
	I0831 23:07:36.179722   51160 command_runner.go:130] >       },
	I0831 23:07:36.179730   51160 command_runner.go:130] >       "username": "",
	I0831 23:07:36.179739   51160 command_runner.go:130] >       "spec": null,
	I0831 23:07:36.179747   51160 command_runner.go:130] >       "pinned": true
	I0831 23:07:36.179754   51160 command_runner.go:130] >     }
	I0831 23:07:36.179764   51160 command_runner.go:130] >   ]
	I0831 23:07:36.179771   51160 command_runner.go:130] > }
	I0831 23:07:36.179943   51160 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:07:36.179958   51160 cache_images.go:84] Images are preloaded, skipping loading
	I0831 23:07:36.179967   51160 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0831 23:07:36.180091   51160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-328486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:07:36.180172   51160 ssh_runner.go:195] Run: crio config
	I0831 23:07:36.212371   51160 command_runner.go:130] ! time="2024-08-31 23:07:36.189798002Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0831 23:07:36.218707   51160 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0831 23:07:36.226861   51160 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0831 23:07:36.226881   51160 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0831 23:07:36.226888   51160 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0831 23:07:36.226891   51160 command_runner.go:130] > #
	I0831 23:07:36.226900   51160 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0831 23:07:36.226907   51160 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0831 23:07:36.226913   51160 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0831 23:07:36.226920   51160 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0831 23:07:36.226924   51160 command_runner.go:130] > # reload'.
	I0831 23:07:36.226930   51160 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0831 23:07:36.226936   51160 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0831 23:07:36.226942   51160 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0831 23:07:36.226947   51160 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0831 23:07:36.226953   51160 command_runner.go:130] > [crio]
	I0831 23:07:36.226959   51160 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0831 23:07:36.226965   51160 command_runner.go:130] > # containers images, in this directory.
	I0831 23:07:36.226969   51160 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0831 23:07:36.226980   51160 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0831 23:07:36.226988   51160 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0831 23:07:36.226995   51160 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0831 23:07:36.226998   51160 command_runner.go:130] > # imagestore = ""
	I0831 23:07:36.227004   51160 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0831 23:07:36.227013   51160 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0831 23:07:36.227017   51160 command_runner.go:130] > storage_driver = "overlay"
	I0831 23:07:36.227023   51160 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0831 23:07:36.227035   51160 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0831 23:07:36.227044   51160 command_runner.go:130] > storage_option = [
	I0831 23:07:36.227051   51160 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0831 23:07:36.227054   51160 command_runner.go:130] > ]
	I0831 23:07:36.227061   51160 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0831 23:07:36.227069   51160 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0831 23:07:36.227079   51160 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0831 23:07:36.227091   51160 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0831 23:07:36.227098   51160 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0831 23:07:36.227103   51160 command_runner.go:130] > # always happen on a node reboot
	I0831 23:07:36.227108   51160 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0831 23:07:36.227121   51160 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0831 23:07:36.227128   51160 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0831 23:07:36.227136   51160 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0831 23:07:36.227141   51160 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0831 23:07:36.227150   51160 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0831 23:07:36.227160   51160 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0831 23:07:36.227166   51160 command_runner.go:130] > # internal_wipe = true
	I0831 23:07:36.227174   51160 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0831 23:07:36.227192   51160 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0831 23:07:36.227198   51160 command_runner.go:130] > # internal_repair = false
	I0831 23:07:36.227203   51160 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0831 23:07:36.227211   51160 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0831 23:07:36.227217   51160 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0831 23:07:36.227223   51160 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0831 23:07:36.227229   51160 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0831 23:07:36.227235   51160 command_runner.go:130] > [crio.api]
	I0831 23:07:36.227240   51160 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0831 23:07:36.227247   51160 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0831 23:07:36.227252   51160 command_runner.go:130] > # IP address on which the stream server will listen.
	I0831 23:07:36.227258   51160 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0831 23:07:36.227265   51160 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0831 23:07:36.227272   51160 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0831 23:07:36.227276   51160 command_runner.go:130] > # stream_port = "0"
	I0831 23:07:36.227283   51160 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0831 23:07:36.227287   51160 command_runner.go:130] > # stream_enable_tls = false
	I0831 23:07:36.227299   51160 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0831 23:07:36.227306   51160 command_runner.go:130] > # stream_idle_timeout = ""
	I0831 23:07:36.227314   51160 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0831 23:07:36.227333   51160 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0831 23:07:36.227341   51160 command_runner.go:130] > # minutes.
	I0831 23:07:36.227347   51160 command_runner.go:130] > # stream_tls_cert = ""
	I0831 23:07:36.227355   51160 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0831 23:07:36.227368   51160 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0831 23:07:36.227376   51160 command_runner.go:130] > # stream_tls_key = ""
	I0831 23:07:36.227384   51160 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0831 23:07:36.227391   51160 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0831 23:07:36.227411   51160 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0831 23:07:36.227417   51160 command_runner.go:130] > # stream_tls_ca = ""
	I0831 23:07:36.227424   51160 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0831 23:07:36.227429   51160 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0831 23:07:36.227435   51160 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0831 23:07:36.227442   51160 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0831 23:07:36.227450   51160 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0831 23:07:36.227458   51160 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0831 23:07:36.227461   51160 command_runner.go:130] > [crio.runtime]
	I0831 23:07:36.227470   51160 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0831 23:07:36.227477   51160 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0831 23:07:36.227481   51160 command_runner.go:130] > # "nofile=1024:2048"
	I0831 23:07:36.227489   51160 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0831 23:07:36.227494   51160 command_runner.go:130] > # default_ulimits = [
	I0831 23:07:36.227497   51160 command_runner.go:130] > # ]
	I0831 23:07:36.227503   51160 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0831 23:07:36.227509   51160 command_runner.go:130] > # no_pivot = false
	I0831 23:07:36.227514   51160 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0831 23:07:36.227522   51160 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0831 23:07:36.227529   51160 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0831 23:07:36.227534   51160 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0831 23:07:36.227542   51160 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0831 23:07:36.227548   51160 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0831 23:07:36.227555   51160 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0831 23:07:36.227559   51160 command_runner.go:130] > # Cgroup setting for conmon
	I0831 23:07:36.227572   51160 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0831 23:07:36.227579   51160 command_runner.go:130] > conmon_cgroup = "pod"
	I0831 23:07:36.227584   51160 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0831 23:07:36.227592   51160 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0831 23:07:36.227600   51160 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0831 23:07:36.227605   51160 command_runner.go:130] > conmon_env = [
	I0831 23:07:36.227611   51160 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0831 23:07:36.227616   51160 command_runner.go:130] > ]
	I0831 23:07:36.227623   51160 command_runner.go:130] > # Additional environment variables to set for all the
	I0831 23:07:36.227633   51160 command_runner.go:130] > # containers. These are overridden if set in the
	I0831 23:07:36.227644   51160 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0831 23:07:36.227652   51160 command_runner.go:130] > # default_env = [
	I0831 23:07:36.227658   51160 command_runner.go:130] > # ]
	I0831 23:07:36.227669   51160 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0831 23:07:36.227683   51160 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0831 23:07:36.227693   51160 command_runner.go:130] > # selinux = false
	I0831 23:07:36.227702   51160 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0831 23:07:36.227714   51160 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0831 23:07:36.227726   51160 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0831 23:07:36.227734   51160 command_runner.go:130] > # seccomp_profile = ""
	I0831 23:07:36.227740   51160 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0831 23:07:36.227747   51160 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0831 23:07:36.227753   51160 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0831 23:07:36.227759   51160 command_runner.go:130] > # which might increase security.
	I0831 23:07:36.227764   51160 command_runner.go:130] > # This option is currently deprecated,
	I0831 23:07:36.227769   51160 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0831 23:07:36.227776   51160 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0831 23:07:36.227782   51160 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0831 23:07:36.227789   51160 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0831 23:07:36.227796   51160 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0831 23:07:36.227803   51160 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0831 23:07:36.227809   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.227815   51160 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0831 23:07:36.227821   51160 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0831 23:07:36.227827   51160 command_runner.go:130] > # the cgroup blockio controller.
	I0831 23:07:36.227831   51160 command_runner.go:130] > # blockio_config_file = ""
	I0831 23:07:36.227847   51160 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0831 23:07:36.227853   51160 command_runner.go:130] > # blockio parameters.
	I0831 23:07:36.227856   51160 command_runner.go:130] > # blockio_reload = false
	I0831 23:07:36.227863   51160 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0831 23:07:36.227869   51160 command_runner.go:130] > # irqbalance daemon.
	I0831 23:07:36.227874   51160 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0831 23:07:36.227883   51160 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0831 23:07:36.227893   51160 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0831 23:07:36.227902   51160 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0831 23:07:36.227909   51160 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0831 23:07:36.227918   51160 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0831 23:07:36.227923   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.227929   51160 command_runner.go:130] > # rdt_config_file = ""
	I0831 23:07:36.227933   51160 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0831 23:07:36.227940   51160 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0831 23:07:36.227969   51160 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0831 23:07:36.227975   51160 command_runner.go:130] > # separate_pull_cgroup = ""
	I0831 23:07:36.227981   51160 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0831 23:07:36.227987   51160 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0831 23:07:36.227990   51160 command_runner.go:130] > # will be added.
	I0831 23:07:36.227997   51160 command_runner.go:130] > # default_capabilities = [
	I0831 23:07:36.228000   51160 command_runner.go:130] > # 	"CHOWN",
	I0831 23:07:36.228006   51160 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0831 23:07:36.228010   51160 command_runner.go:130] > # 	"FSETID",
	I0831 23:07:36.228016   51160 command_runner.go:130] > # 	"FOWNER",
	I0831 23:07:36.228020   51160 command_runner.go:130] > # 	"SETGID",
	I0831 23:07:36.228025   51160 command_runner.go:130] > # 	"SETUID",
	I0831 23:07:36.228029   51160 command_runner.go:130] > # 	"SETPCAP",
	I0831 23:07:36.228032   51160 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0831 23:07:36.228038   51160 command_runner.go:130] > # 	"KILL",
	I0831 23:07:36.228041   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228050   51160 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0831 23:07:36.228058   51160 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0831 23:07:36.228064   51160 command_runner.go:130] > # add_inheritable_capabilities = false
	I0831 23:07:36.228071   51160 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0831 23:07:36.228078   51160 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0831 23:07:36.228095   51160 command_runner.go:130] > default_sysctls = [
	I0831 23:07:36.228102   51160 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0831 23:07:36.228106   51160 command_runner.go:130] > ]
	I0831 23:07:36.228111   51160 command_runner.go:130] > # List of devices on the host that a
	I0831 23:07:36.228117   51160 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0831 23:07:36.228123   51160 command_runner.go:130] > # allowed_devices = [
	I0831 23:07:36.228127   51160 command_runner.go:130] > # 	"/dev/fuse",
	I0831 23:07:36.228132   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228136   51160 command_runner.go:130] > # List of additional devices. specified as
	I0831 23:07:36.228145   51160 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0831 23:07:36.228156   51160 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0831 23:07:36.228166   51160 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0831 23:07:36.228172   51160 command_runner.go:130] > # additional_devices = [
	I0831 23:07:36.228175   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228182   51160 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0831 23:07:36.228186   51160 command_runner.go:130] > # cdi_spec_dirs = [
	I0831 23:07:36.228192   51160 command_runner.go:130] > # 	"/etc/cdi",
	I0831 23:07:36.228195   51160 command_runner.go:130] > # 	"/var/run/cdi",
	I0831 23:07:36.228201   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228207   51160 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0831 23:07:36.228214   51160 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0831 23:07:36.228221   51160 command_runner.go:130] > # Defaults to false.
	I0831 23:07:36.228226   51160 command_runner.go:130] > # device_ownership_from_security_context = false
	I0831 23:07:36.228234   51160 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0831 23:07:36.228242   51160 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0831 23:07:36.228248   51160 command_runner.go:130] > # hooks_dir = [
	I0831 23:07:36.228253   51160 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0831 23:07:36.228258   51160 command_runner.go:130] > # ]
	I0831 23:07:36.228263   51160 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0831 23:07:36.228271   51160 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0831 23:07:36.228279   51160 command_runner.go:130] > # its default mounts from the following two files:
	I0831 23:07:36.228284   51160 command_runner.go:130] > #
	I0831 23:07:36.228292   51160 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0831 23:07:36.228298   51160 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0831 23:07:36.228305   51160 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0831 23:07:36.228309   51160 command_runner.go:130] > #
	I0831 23:07:36.228323   51160 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0831 23:07:36.228331   51160 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0831 23:07:36.228341   51160 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0831 23:07:36.228348   51160 command_runner.go:130] > #      only add mounts it finds in this file.
	I0831 23:07:36.228351   51160 command_runner.go:130] > #
	I0831 23:07:36.228355   51160 command_runner.go:130] > # default_mounts_file = ""
	I0831 23:07:36.228362   51160 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0831 23:07:36.228368   51160 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0831 23:07:36.228374   51160 command_runner.go:130] > pids_limit = 1024
	I0831 23:07:36.228380   51160 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0831 23:07:36.228387   51160 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0831 23:07:36.228393   51160 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0831 23:07:36.228402   51160 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0831 23:07:36.228408   51160 command_runner.go:130] > # log_size_max = -1
	I0831 23:07:36.228415   51160 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0831 23:07:36.228424   51160 command_runner.go:130] > # log_to_journald = false
	I0831 23:07:36.228432   51160 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0831 23:07:36.228439   51160 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0831 23:07:36.228444   51160 command_runner.go:130] > # Path to directory for container attach sockets.
	I0831 23:07:36.228451   51160 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0831 23:07:36.228456   51160 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0831 23:07:36.228462   51160 command_runner.go:130] > # bind_mount_prefix = ""
	I0831 23:07:36.228467   51160 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0831 23:07:36.228473   51160 command_runner.go:130] > # read_only = false
	I0831 23:07:36.228479   51160 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0831 23:07:36.228487   51160 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0831 23:07:36.228493   51160 command_runner.go:130] > # live configuration reload.
	I0831 23:07:36.228497   51160 command_runner.go:130] > # log_level = "info"
	I0831 23:07:36.228504   51160 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0831 23:07:36.228511   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.228515   51160 command_runner.go:130] > # log_filter = ""
	I0831 23:07:36.228523   51160 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0831 23:07:36.228533   51160 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0831 23:07:36.228539   51160 command_runner.go:130] > # separated by comma.
	I0831 23:07:36.228546   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228552   51160 command_runner.go:130] > # uid_mappings = ""
	I0831 23:07:36.228562   51160 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0831 23:07:36.228571   51160 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0831 23:07:36.228576   51160 command_runner.go:130] > # separated by comma.
	I0831 23:07:36.228584   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228590   51160 command_runner.go:130] > # gid_mappings = ""
	I0831 23:07:36.228596   51160 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0831 23:07:36.228604   51160 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0831 23:07:36.228610   51160 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0831 23:07:36.228619   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228629   51160 command_runner.go:130] > # minimum_mappable_uid = -1
	I0831 23:07:36.228645   51160 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0831 23:07:36.228657   51160 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0831 23:07:36.228669   51160 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0831 23:07:36.228682   51160 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0831 23:07:36.228691   51160 command_runner.go:130] > # minimum_mappable_gid = -1
	I0831 23:07:36.228703   51160 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0831 23:07:36.228715   51160 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0831 23:07:36.228723   51160 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0831 23:07:36.228729   51160 command_runner.go:130] > # ctr_stop_timeout = 30
	I0831 23:07:36.228735   51160 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0831 23:07:36.228742   51160 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0831 23:07:36.228749   51160 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0831 23:07:36.228754   51160 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0831 23:07:36.228760   51160 command_runner.go:130] > drop_infra_ctr = false
	I0831 23:07:36.228766   51160 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0831 23:07:36.228771   51160 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0831 23:07:36.228780   51160 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0831 23:07:36.228786   51160 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0831 23:07:36.228793   51160 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0831 23:07:36.228801   51160 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0831 23:07:36.228807   51160 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0831 23:07:36.228814   51160 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0831 23:07:36.228818   51160 command_runner.go:130] > # shared_cpuset = ""
	I0831 23:07:36.228826   51160 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0831 23:07:36.228832   51160 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0831 23:07:36.228837   51160 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0831 23:07:36.228850   51160 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0831 23:07:36.228856   51160 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0831 23:07:36.228861   51160 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0831 23:07:36.228869   51160 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0831 23:07:36.228874   51160 command_runner.go:130] > # enable_criu_support = false
	I0831 23:07:36.228879   51160 command_runner.go:130] > # Enable/disable the generation of the container,
	I0831 23:07:36.228894   51160 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0831 23:07:36.228901   51160 command_runner.go:130] > # enable_pod_events = false
	I0831 23:07:36.228907   51160 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0831 23:07:36.228920   51160 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0831 23:07:36.228926   51160 command_runner.go:130] > # default_runtime = "runc"
	I0831 23:07:36.228931   51160 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0831 23:07:36.228940   51160 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0831 23:07:36.228955   51160 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0831 23:07:36.228968   51160 command_runner.go:130] > # creation as a file is not desired either.
	I0831 23:07:36.228983   51160 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0831 23:07:36.228994   51160 command_runner.go:130] > # the hostname is being managed dynamically.
	I0831 23:07:36.229005   51160 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0831 23:07:36.229022   51160 command_runner.go:130] > # ]
	I0831 23:07:36.229034   51160 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0831 23:07:36.229047   51160 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0831 23:07:36.229058   51160 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0831 23:07:36.229065   51160 command_runner.go:130] > # Each entry in the table should follow the format:
	I0831 23:07:36.229069   51160 command_runner.go:130] > #
	I0831 23:07:36.229074   51160 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0831 23:07:36.229082   51160 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0831 23:07:36.229129   51160 command_runner.go:130] > # runtime_type = "oci"
	I0831 23:07:36.229136   51160 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0831 23:07:36.229141   51160 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0831 23:07:36.229145   51160 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0831 23:07:36.229148   51160 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0831 23:07:36.229152   51160 command_runner.go:130] > # monitor_env = []
	I0831 23:07:36.229158   51160 command_runner.go:130] > # privileged_without_host_devices = false
	I0831 23:07:36.229162   51160 command_runner.go:130] > # allowed_annotations = []
	I0831 23:07:36.229169   51160 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0831 23:07:36.229179   51160 command_runner.go:130] > # Where:
	I0831 23:07:36.229187   51160 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0831 23:07:36.229193   51160 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0831 23:07:36.229201   51160 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0831 23:07:36.229209   51160 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0831 23:07:36.229214   51160 command_runner.go:130] > #   in $PATH.
	I0831 23:07:36.229220   51160 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0831 23:07:36.229226   51160 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0831 23:07:36.229232   51160 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0831 23:07:36.229238   51160 command_runner.go:130] > #   state.
	I0831 23:07:36.229245   51160 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0831 23:07:36.229253   51160 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0831 23:07:36.229260   51160 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0831 23:07:36.229267   51160 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0831 23:07:36.229273   51160 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0831 23:07:36.229281   51160 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0831 23:07:36.229289   51160 command_runner.go:130] > #   The currently recognized values are:
	I0831 23:07:36.229298   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0831 23:07:36.229307   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0831 23:07:36.229314   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0831 23:07:36.229321   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0831 23:07:36.229330   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0831 23:07:36.229339   51160 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0831 23:07:36.229348   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0831 23:07:36.229354   51160 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0831 23:07:36.229361   51160 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0831 23:07:36.229367   51160 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0831 23:07:36.229373   51160 command_runner.go:130] > #   deprecated option "conmon".
	I0831 23:07:36.229382   51160 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0831 23:07:36.229389   51160 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0831 23:07:36.229395   51160 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0831 23:07:36.229402   51160 command_runner.go:130] > #   should be moved to the container's cgroup
	I0831 23:07:36.229408   51160 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0831 23:07:36.229415   51160 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0831 23:07:36.229421   51160 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0831 23:07:36.229428   51160 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0831 23:07:36.229438   51160 command_runner.go:130] > #
	I0831 23:07:36.229445   51160 command_runner.go:130] > # Using the seccomp notifier feature:
	I0831 23:07:36.229449   51160 command_runner.go:130] > #
	I0831 23:07:36.229454   51160 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0831 23:07:36.229462   51160 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0831 23:07:36.229467   51160 command_runner.go:130] > #
	I0831 23:07:36.229473   51160 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0831 23:07:36.229481   51160 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0831 23:07:36.229485   51160 command_runner.go:130] > #
	I0831 23:07:36.229490   51160 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0831 23:07:36.229496   51160 command_runner.go:130] > # feature.
	I0831 23:07:36.229499   51160 command_runner.go:130] > #
	I0831 23:07:36.229505   51160 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0831 23:07:36.229513   51160 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0831 23:07:36.229521   51160 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0831 23:07:36.229529   51160 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0831 23:07:36.229537   51160 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0831 23:07:36.229542   51160 command_runner.go:130] > #
	I0831 23:07:36.229548   51160 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0831 23:07:36.229556   51160 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0831 23:07:36.229561   51160 command_runner.go:130] > #
	I0831 23:07:36.229566   51160 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0831 23:07:36.229573   51160 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0831 23:07:36.229576   51160 command_runner.go:130] > #
	I0831 23:07:36.229582   51160 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0831 23:07:36.229590   51160 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0831 23:07:36.229593   51160 command_runner.go:130] > # limitation.
	I0831 23:07:36.229599   51160 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0831 23:07:36.229604   51160 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0831 23:07:36.229608   51160 command_runner.go:130] > runtime_type = "oci"
	I0831 23:07:36.229614   51160 command_runner.go:130] > runtime_root = "/run/runc"
	I0831 23:07:36.229618   51160 command_runner.go:130] > runtime_config_path = ""
	I0831 23:07:36.229625   51160 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0831 23:07:36.229632   51160 command_runner.go:130] > monitor_cgroup = "pod"
	I0831 23:07:36.229641   51160 command_runner.go:130] > monitor_exec_cgroup = ""
	I0831 23:07:36.229647   51160 command_runner.go:130] > monitor_env = [
	I0831 23:07:36.229665   51160 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0831 23:07:36.229672   51160 command_runner.go:130] > ]
	I0831 23:07:36.229680   51160 command_runner.go:130] > privileged_without_host_devices = false
	I0831 23:07:36.229692   51160 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0831 23:07:36.229703   51160 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0831 23:07:36.229715   51160 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0831 23:07:36.229726   51160 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0831 23:07:36.229737   51160 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0831 23:07:36.229742   51160 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0831 23:07:36.229752   51160 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0831 23:07:36.229761   51160 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0831 23:07:36.229767   51160 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0831 23:07:36.229774   51160 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0831 23:07:36.229777   51160 command_runner.go:130] > # Example:
	I0831 23:07:36.229781   51160 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0831 23:07:36.229785   51160 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0831 23:07:36.229795   51160 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0831 23:07:36.229799   51160 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0831 23:07:36.229802   51160 command_runner.go:130] > # cpuset = 0
	I0831 23:07:36.229806   51160 command_runner.go:130] > # cpushares = "0-1"
	I0831 23:07:36.229809   51160 command_runner.go:130] > # Where:
	I0831 23:07:36.229813   51160 command_runner.go:130] > # The workload name is workload-type.
	I0831 23:07:36.229820   51160 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0831 23:07:36.229824   51160 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0831 23:07:36.229829   51160 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0831 23:07:36.229837   51160 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0831 23:07:36.229842   51160 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0831 23:07:36.229847   51160 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0831 23:07:36.229852   51160 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0831 23:07:36.229857   51160 command_runner.go:130] > # Default value is set to true
	I0831 23:07:36.229861   51160 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0831 23:07:36.229866   51160 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0831 23:07:36.229870   51160 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0831 23:07:36.229874   51160 command_runner.go:130] > # Default value is set to 'false'
	I0831 23:07:36.229878   51160 command_runner.go:130] > # disable_hostport_mapping = false
	I0831 23:07:36.229883   51160 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0831 23:07:36.229891   51160 command_runner.go:130] > #
	I0831 23:07:36.229896   51160 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0831 23:07:36.229901   51160 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0831 23:07:36.229909   51160 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0831 23:07:36.229914   51160 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0831 23:07:36.229919   51160 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0831 23:07:36.229923   51160 command_runner.go:130] > [crio.image]
	I0831 23:07:36.229928   51160 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0831 23:07:36.229932   51160 command_runner.go:130] > # default_transport = "docker://"
	I0831 23:07:36.229938   51160 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0831 23:07:36.229943   51160 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0831 23:07:36.229948   51160 command_runner.go:130] > # global_auth_file = ""
	I0831 23:07:36.229953   51160 command_runner.go:130] > # The image used to instantiate infra containers.
	I0831 23:07:36.229957   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.229961   51160 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0831 23:07:36.229967   51160 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0831 23:07:36.229972   51160 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0831 23:07:36.229976   51160 command_runner.go:130] > # This option supports live configuration reload.
	I0831 23:07:36.229984   51160 command_runner.go:130] > # pause_image_auth_file = ""
	I0831 23:07:36.229991   51160 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0831 23:07:36.229997   51160 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0831 23:07:36.230004   51160 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0831 23:07:36.230010   51160 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0831 23:07:36.230016   51160 command_runner.go:130] > # pause_command = "/pause"
	I0831 23:07:36.230021   51160 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0831 23:07:36.230028   51160 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0831 23:07:36.230034   51160 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0831 23:07:36.230042   51160 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0831 23:07:36.230050   51160 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0831 23:07:36.230056   51160 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0831 23:07:36.230061   51160 command_runner.go:130] > # pinned_images = [
	I0831 23:07:36.230065   51160 command_runner.go:130] > # ]
	I0831 23:07:36.230070   51160 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0831 23:07:36.230079   51160 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0831 23:07:36.230093   51160 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0831 23:07:36.230105   51160 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0831 23:07:36.230121   51160 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0831 23:07:36.230131   51160 command_runner.go:130] > # signature_policy = ""
	I0831 23:07:36.230138   51160 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0831 23:07:36.230149   51160 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0831 23:07:36.230160   51160 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0831 23:07:36.230170   51160 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0831 23:07:36.230180   51160 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0831 23:07:36.230186   51160 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0831 23:07:36.230194   51160 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0831 23:07:36.230205   51160 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0831 23:07:36.230210   51160 command_runner.go:130] > # changing them here.
	I0831 23:07:36.230218   51160 command_runner.go:130] > # insecure_registries = [
	I0831 23:07:36.230224   51160 command_runner.go:130] > # ]
	I0831 23:07:36.230234   51160 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0831 23:07:36.230244   51160 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0831 23:07:36.230250   51160 command_runner.go:130] > # image_volumes = "mkdir"
	I0831 23:07:36.230260   51160 command_runner.go:130] > # Temporary directory to use for storing big files
	I0831 23:07:36.230267   51160 command_runner.go:130] > # big_files_temporary_dir = ""
	I0831 23:07:36.230282   51160 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0831 23:07:36.230292   51160 command_runner.go:130] > # CNI plugins.
	I0831 23:07:36.230297   51160 command_runner.go:130] > [crio.network]
	I0831 23:07:36.230306   51160 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0831 23:07:36.230314   51160 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0831 23:07:36.230318   51160 command_runner.go:130] > # cni_default_network = ""
	I0831 23:07:36.230326   51160 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0831 23:07:36.230330   51160 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0831 23:07:36.230338   51160 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0831 23:07:36.230343   51160 command_runner.go:130] > # plugin_dirs = [
	I0831 23:07:36.230347   51160 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0831 23:07:36.230350   51160 command_runner.go:130] > # ]
	I0831 23:07:36.230355   51160 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0831 23:07:36.230362   51160 command_runner.go:130] > [crio.metrics]
	I0831 23:07:36.230366   51160 command_runner.go:130] > # Globally enable or disable metrics support.
	I0831 23:07:36.230371   51160 command_runner.go:130] > enable_metrics = true
	I0831 23:07:36.230375   51160 command_runner.go:130] > # Specify enabled metrics collectors.
	I0831 23:07:36.230380   51160 command_runner.go:130] > # Per default all metrics are enabled.
	I0831 23:07:36.230391   51160 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0831 23:07:36.230400   51160 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0831 23:07:36.230405   51160 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0831 23:07:36.230411   51160 command_runner.go:130] > # metrics_collectors = [
	I0831 23:07:36.230415   51160 command_runner.go:130] > # 	"operations",
	I0831 23:07:36.230423   51160 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0831 23:07:36.230430   51160 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0831 23:07:36.230434   51160 command_runner.go:130] > # 	"operations_errors",
	I0831 23:07:36.230440   51160 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0831 23:07:36.230444   51160 command_runner.go:130] > # 	"image_pulls_by_name",
	I0831 23:07:36.230450   51160 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0831 23:07:36.230454   51160 command_runner.go:130] > # 	"image_pulls_failures",
	I0831 23:07:36.230458   51160 command_runner.go:130] > # 	"image_pulls_successes",
	I0831 23:07:36.230464   51160 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0831 23:07:36.230468   51160 command_runner.go:130] > # 	"image_layer_reuse",
	I0831 23:07:36.230475   51160 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0831 23:07:36.230479   51160 command_runner.go:130] > # 	"containers_oom_total",
	I0831 23:07:36.230484   51160 command_runner.go:130] > # 	"containers_oom",
	I0831 23:07:36.230488   51160 command_runner.go:130] > # 	"processes_defunct",
	I0831 23:07:36.230494   51160 command_runner.go:130] > # 	"operations_total",
	I0831 23:07:36.230499   51160 command_runner.go:130] > # 	"operations_latency_seconds",
	I0831 23:07:36.230505   51160 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0831 23:07:36.230509   51160 command_runner.go:130] > # 	"operations_errors_total",
	I0831 23:07:36.230514   51160 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0831 23:07:36.230519   51160 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0831 23:07:36.230525   51160 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0831 23:07:36.230529   51160 command_runner.go:130] > # 	"image_pulls_success_total",
	I0831 23:07:36.230537   51160 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0831 23:07:36.230542   51160 command_runner.go:130] > # 	"containers_oom_count_total",
	I0831 23:07:36.230548   51160 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0831 23:07:36.230553   51160 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0831 23:07:36.230558   51160 command_runner.go:130] > # ]
	I0831 23:07:36.230563   51160 command_runner.go:130] > # The port on which the metrics server will listen.
	I0831 23:07:36.230569   51160 command_runner.go:130] > # metrics_port = 9090
	I0831 23:07:36.230574   51160 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0831 23:07:36.230580   51160 command_runner.go:130] > # metrics_socket = ""
	I0831 23:07:36.230591   51160 command_runner.go:130] > # The certificate for the secure metrics server.
	I0831 23:07:36.230599   51160 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0831 23:07:36.230607   51160 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0831 23:07:36.230614   51160 command_runner.go:130] > # certificate on any modification event.
	I0831 23:07:36.230618   51160 command_runner.go:130] > # metrics_cert = ""
	I0831 23:07:36.230628   51160 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0831 23:07:36.230644   51160 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0831 23:07:36.230653   51160 command_runner.go:130] > # metrics_key = ""
	I0831 23:07:36.230661   51160 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0831 23:07:36.230670   51160 command_runner.go:130] > [crio.tracing]
	I0831 23:07:36.230678   51160 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0831 23:07:36.230687   51160 command_runner.go:130] > # enable_tracing = false
	I0831 23:07:36.230695   51160 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0831 23:07:36.230703   51160 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0831 23:07:36.230714   51160 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0831 23:07:36.230724   51160 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0831 23:07:36.230731   51160 command_runner.go:130] > # CRI-O NRI configuration.
	I0831 23:07:36.230738   51160 command_runner.go:130] > [crio.nri]
	I0831 23:07:36.230743   51160 command_runner.go:130] > # Globally enable or disable NRI.
	I0831 23:07:36.230752   51160 command_runner.go:130] > # enable_nri = false
	I0831 23:07:36.230759   51160 command_runner.go:130] > # NRI socket to listen on.
	I0831 23:07:36.230764   51160 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0831 23:07:36.230770   51160 command_runner.go:130] > # NRI plugin directory to use.
	I0831 23:07:36.230774   51160 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0831 23:07:36.230779   51160 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0831 23:07:36.230784   51160 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0831 23:07:36.230792   51160 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0831 23:07:36.230796   51160 command_runner.go:130] > # nri_disable_connections = false
	I0831 23:07:36.230806   51160 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0831 23:07:36.230812   51160 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0831 23:07:36.230817   51160 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0831 23:07:36.230823   51160 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0831 23:07:36.230829   51160 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0831 23:07:36.230835   51160 command_runner.go:130] > [crio.stats]
	I0831 23:07:36.230844   51160 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0831 23:07:36.230851   51160 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0831 23:07:36.230860   51160 command_runner.go:130] > # stats_collection_period = 0
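The config dump above ends with the metrics, tracing, NRI and stats sections; enable_metrics is true and metrics_port is left at the default 9090. As a hedged, illustrative sketch (not part of the test flow), the Go snippet below scrapes that Prometheus endpoint, assuming the default port and that it is reachable from wherever the snippet runs (inside the node or via a tunnel).

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Assumes CRI-O's metrics server is enabled (enable_metrics = true) and
	// listening on the default metrics_port shown in the config dump above.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("scraping CRI-O metrics: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading metrics body: %v", err)
	}
	// Prometheus text exposition format, e.g. crio_operations_total{...} 42
	fmt.Printf("%s", body)
}
```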
	I0831 23:07:36.231059   51160 cni.go:84] Creating CNI manager for ""
	I0831 23:07:36.231075   51160 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0831 23:07:36.231095   51160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:07:36.231117   51160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328486 NodeName:multinode-328486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 23:07:36.231250   51160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328486"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 23:07:36.231311   51160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:07:36.241716   51160 command_runner.go:130] > kubeadm
	I0831 23:07:36.241736   51160 command_runner.go:130] > kubectl
	I0831 23:07:36.241741   51160 command_runner.go:130] > kubelet
	I0831 23:07:36.241759   51160 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:07:36.241811   51160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 23:07:36.251054   51160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0831 23:07:36.268147   51160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:07:36.284351   51160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0831 23:07:36.301455   51160 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0831 23:07:36.305425   51160 command_runner.go:130] > 192.168.39.107	control-plane.minikube.internal
	I0831 23:07:36.305511   51160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:07:36.442373   51160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:07:36.457168   51160 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486 for IP: 192.168.39.107
	I0831 23:07:36.457193   51160 certs.go:194] generating shared ca certs ...
	I0831 23:07:36.457213   51160 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:07:36.457363   51160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 23:07:36.457415   51160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 23:07:36.457429   51160 certs.go:256] generating profile certs ...
	I0831 23:07:36.457513   51160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/client.key
	I0831 23:07:36.457587   51160 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.key.ee1e7169
	I0831 23:07:36.457640   51160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.key
	I0831 23:07:36.457655   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:07:36.457674   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:07:36.457692   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:07:36.457711   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:07:36.457729   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:07:36.457749   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:07:36.457768   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:07:36.457786   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:07:36.457863   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 23:07:36.457904   51160 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 23:07:36.457918   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 23:07:36.457952   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:07:36.457984   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:07:36.458016   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 23:07:36.458068   51160 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:07:36.458125   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem -> /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.458146   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.458165   51160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.458741   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:07:36.483460   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:07:36.507473   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:07:36.531115   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:07:36.554215   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0831 23:07:36.577936   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:07:36.601696   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:07:36.625372   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/multinode-328486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:07:36.649389   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 23:07:36.672751   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 23:07:36.697764   51160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:07:36.722951   51160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:07:36.740566   51160 ssh_runner.go:195] Run: openssl version
	I0831 23:07:36.746507   51160 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0831 23:07:36.746591   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 23:07:36.757642   51160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.762348   51160 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.762380   51160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.762418   51160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 23:07:36.768154   51160 command_runner.go:130] > 51391683
	I0831 23:07:36.768296   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 23:07:36.777745   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 23:07:36.788550   51160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.793014   51160 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.793042   51160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.793074   51160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 23:07:36.798824   51160 command_runner.go:130] > 3ec20f2e
	I0831 23:07:36.798870   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:07:36.807855   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:07:36.818102   51160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.822584   51160 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.822605   51160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.822639   51160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:07:36.827953   51160 command_runner.go:130] > b5213941
	I0831 23:07:36.828064   51160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:07:36.837404   51160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:07:36.841856   51160 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:07:36.841881   51160 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0831 23:07:36.841889   51160 command_runner.go:130] > Device: 253,1	Inode: 2103318     Links: 1
	I0831 23:07:36.841897   51160 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0831 23:07:36.841907   51160 command_runner.go:130] > Access: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841918   51160 command_runner.go:130] > Modify: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841926   51160 command_runner.go:130] > Change: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841934   51160 command_runner.go:130] >  Birth: 2024-08-31 23:00:46.224983019 +0000
	I0831 23:07:36.841991   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:07:36.847679   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.847740   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:07:36.853358   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.853412   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:07:36.858976   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.859136   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:07:36.864658   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.864822   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:07:36.870609   51160 command_runner.go:130] > Certificate will not expire
	I0831 23:07:36.870686   51160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0831 23:07:36.876438   51160 command_runner.go:130] > Certificate will not expire
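Each `openssl x509 ... -checkend 86400` call above asks whether the certificate expires within the next 24 hours, printing "Certificate will not expire" when it does not. A roughly equivalent check in Go (a sketch with a hypothetical file path, not minikube's own code) looks like this:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the run above checks certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same idea as `openssl x509 -checkend 86400`: does it expire within 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```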
	I0831 23:07:36.876505   51160 kubeadm.go:392] StartCluster: {Name:multinode-328486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-328486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:07:36.876609   51160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 23:07:36.876664   51160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:07:36.916517   51160 command_runner.go:130] > 1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391
	I0831 23:07:36.916546   51160 command_runner.go:130] > 59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b
	I0831 23:07:36.916557   51160 command_runner.go:130] > b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999
	I0831 23:07:36.916564   51160 command_runner.go:130] > a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060
	I0831 23:07:36.916569   51160 command_runner.go:130] > 4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90
	I0831 23:07:36.916576   51160 command_runner.go:130] > 980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd
	I0831 23:07:36.916581   51160 command_runner.go:130] > 4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1
	I0831 23:07:36.916598   51160 command_runner.go:130] > 4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5
	I0831 23:07:36.916617   51160 cri.go:89] found id: "1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391"
	I0831 23:07:36.916624   51160 cri.go:89] found id: "59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b"
	I0831 23:07:36.916627   51160 cri.go:89] found id: "b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999"
	I0831 23:07:36.916630   51160 cri.go:89] found id: "a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060"
	I0831 23:07:36.916633   51160 cri.go:89] found id: "4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90"
	I0831 23:07:36.916637   51160 cri.go:89] found id: "980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd"
	I0831 23:07:36.916639   51160 cri.go:89] found id: "4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1"
	I0831 23:07:36.916642   51160 cri.go:89] found id: "4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5"
	I0831 23:07:36.916645   51160 cri.go:89] found id: ""
	I0831 23:07:36.916685   51160 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.386533113Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,Verbose:false,}" file="otel-collector/interceptors.go:62" id=24db3183-e344-4126-9ab5-a3691b1b3427 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.386682435Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145664305929696,StartedAt:1725145664350894501,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240813-c6f155d6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d8568fb2-7a88-4241-8bf2-501a06c4132a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d8568fb2-7a88-4241-8bf2-501a06c4132a/containers/kindnet-cni/c511d0dc,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath
:/etc/cni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d8568fb2-7a88-4241-8bf2-501a06c4132a/volumes/kubernetes.io~projected/kube-api-access-8w7cd,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-db4rl_d8568fb2-7a88-4241-8bf2-501a06c4132a/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=24db3183-e344-4126-9ab5-a3691b
1b3427 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.387243777Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cf9c5483-8e0c-4a47-8423-11cc3bede2a6 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.387352556Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145664171207036,StartedAt:1725145664207077532,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/8d1277f4-c23e-4aea-a068-cd1ba2f5df16/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8d1277f4-c23e-4aea-a068-cd1ba2f5df16/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8d1277f4-c23e-4aea-a068-cd1ba2f5df16/containers/coredns/7f9f3ae1,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8d1277f4-c23e-4aea-a068-cd1ba2f5df16/volumes/kubernetes.io~projected/kube-api-access-nf79p,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-6f6b679f8f-qc6xv_8d1277f4-c23e-4aea-a068-cd1ba2f5df16/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cf9c5483-8e0c-4a47-8423-11cc3bede2a6 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.387539817Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=f6130059-bb37-4a6f-b0d4-dcced79edf89 name=/runtime.v1.RuntimeService/Status
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.387599350Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f6130059-bb37-4a6f-b0d4-dcced79edf89 name=/runtime.v1.RuntimeService/Status
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.388027373Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0300771b-c1bb-430f-a335-152c36b41947 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.388119324Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145663952468172,StartedAt:1725145664014277716,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8ff8d204-a6c4-4003-8a05-780d37fe2a6d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8ff8d204-a6c4-4003-8a05-780d37fe2a6d/containers/storage-provisioner/fabb5823,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8ff8d204-a6c4-4003-8a05-780d37fe2a6d/volumes/kubernetes.io~projected/kube-api-access-2lz57,Readonly:true,SelinuxRelabel:fals
e,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_8ff8d204-a6c4-4003-8a05-780d37fe2a6d/storage-provisioner/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0300771b-c1bb-430f-a335-152c36b41947 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.388801468Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=809bff03-95a5-4eb4-813b-f005e60aa393 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.388914995Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145663910820393,StartedAt:1725145663953824127,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1/containers/kube-proxy/7c08562c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/l
ib/kubelet/pods/d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1/volumes/kubernetes.io~projected/kube-api-access-2gs5z,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-d26wn_d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-co
llector/interceptors.go:74" id=809bff03-95a5-4eb4-813b-f005e60aa393 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.391829579Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,Verbose:false,}" file="otel-collector/interceptors.go:62" id=72e9504e-5e99-4570-b919-3d8d8299c27e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.391946799Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145659168278900,StartedAt:1725145659256027306,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6627a1571b503abff6d9495763e77905/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6627a1571b503abff6d9495763e77905/containers/etcd/75f9a5e7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-m
ultinode-328486_6627a1571b503abff6d9495763e77905/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=72e9504e-5e99-4570-b919-3d8d8299c27e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.393098892Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d50292aa-54a7-463b-b4aa-bb6176eebd1e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.393190691Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145659125301043,StartedAt:1725145659196989114,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5ad8881fe2e92036f6465ab61a86a5c2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5ad8881fe2e92036f6465ab61a86a5c2/containers/kube-scheduler/a484d273,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-multinode-328486_5ad8881fe2e92036f6465ab61a86a5c2/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeri
od:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d50292aa-54a7-463b-b4aa-bb6176eebd1e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.393508617Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a1dbda60-d425-4145-9f18-104c482eec39 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.393596533Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145659033356156,StartedAt:1725145659107981122,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5a5a6c8b0cd156f96b3dc0eb23911d2e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5a5a6c8b0cd156f96b3dc0eb23911d2e/containers/kube-apiserver/4057de41,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Containe
rPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-multinode-328486_5a5a6c8b0cd156f96b3dc0eb23911d2e/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a1dbda60-d425-4145-9f18-104c482eec39 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.394590014Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,Verbose:false,}" file="otel-collector/interceptors.go:62" id=00cd7ecc-e326-4c73-a23c-302cf74bc481 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.394694445Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1725145658952579490,StartedAt:1725145659048482734,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/33735aba1476d2b79d46054c0907f94e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/33735aba1476d2b79d46054c0907f94e/containers/kube-controller-manager/a7bf60c9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,
UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-multinode-328486_33735aba1476d2b79d46054c0907f94e/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMem
s:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=00cd7ecc-e326-4c73-a23c-302cf74bc481 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.433105615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d792d11-393c-4a47-bd1d-db77058b509c name=/runtime.v1.RuntimeService/Version
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.433179326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d792d11-393c-4a47-bd1d-db77058b509c name=/runtime.v1.RuntimeService/Version
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.434536939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=812a09d9-532d-4ad2-82ad-eb0182307c4c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.434961015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145908434938373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=812a09d9-532d-4ad2-82ad-eb0182307c4c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.435686132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3126536-04c9-4f9f-aee9-09933a203932 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.435744886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3126536-04c9-4f9f-aee9-09933a203932 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:11:48 multinode-328486 crio[2728]: time="2024-08-31 23:11:48.436102744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4b9117718867762d4ec1613ed7322c4abfa72005cb92d8018b922282a80d85,PodSandboxId:6354e5a895ed6d06456ec5b16d6c824cc23bd897c9907aaaa43a4d334272654c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725145697525584542,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342,PodSandboxId:e2197d9bd51fe0d63d4cf8c7d95b6bb41789d6b6ddea7eb358cf6448fb27cdbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725145664094961658,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b,PodSandboxId:f53b33402e0d6fe1c895ae9b722f85501dac8408a5b2b0f12d69f790d0179922,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725145664071926088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766cdf89d49cc3ce1d99f4804e91e565591ad577dd431c646112797f22fb0273,PodSandboxId:890e773dbbc410c322dc4a348e1d9da6a851372cd12a334f86770588e560c82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725145663882100507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a,PodSandboxId:f4b8369314621c963ce45db20de7698fbffc84d546a2d63e9909759e07f64af6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725145663823747300,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604,PodSandboxId:8ba69a9980cb0a5009471a4c2b5b1bf64b9c7de6caa2af0de4d2756c6c5e179f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725145659022601896,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97,PodSandboxId:adc69baecb81c23207a5d36a86e07db3152d2a3378967f5a97f5b99f749d0c9e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725145658950613576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2,PodSandboxId:5a7b0aa2a1c6c40fe4c2485c32c0b4a7a39d5729b7b40d809195a111e186ebdc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725145658969911445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5,PodSandboxId:bfc9ffecf4faf522c4d40e1c84e573c9175eb61f6ab76e1274333722a90e9709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725145658865941512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ea00636a8abd5658da3a89d2e52a578b618823bd0951c06274aee040f9fbc93,PodSandboxId:211ac5f7cdc1e037699bd87c354f6495083806abd09c529554b01ec871df2ff2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725145334349135237,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d8fm4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d63e2892-4f48-47e0-af7a-f7ef96a818f0,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391,PodSandboxId:baeb7772fb632d5aa1b822df12d0f7e75ede183a7f4fff14145e0adb86b98348,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725145276679568324,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qc6xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d1277f4-c23e-4aea-a068-cd1ba2f5df16,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e76f99b3b4b7c3c40a826f3df7d5bf1164495da0fb69001c238e528bcece6b,PodSandboxId:0d9ed579185612e622f9270e1911fd5d4c4bea5592416bb7e67870dadabf59cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725145276616262398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8ff8d204-a6c4-4003-8a05-780d37fe2a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999,PodSandboxId:53bc0437f5619a15de3776b9590136135893e16b37d3f950bc35c853e61cb4c2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725145264916205977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-db4rl,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d8568fb2-7a88-4241-8bf2-501a06c4132a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060,PodSandboxId:a194e9b69b97299473ef3967cdc39580a099f2a07408d8f91035e907cd75998e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725145261144659263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d26wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d557c77f-bb71-4ca8-a8cb-d1a5ade56cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90,PodSandboxId:70a70b890a48e99f30786b2676e6887ad512fc969979cf899136fb90216d16cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725145250476776281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
d8881fe2e92036f6465ab61a86a5c2,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd,PodSandboxId:27f9441cd679a6d36b2eff027de3386b40f6bf1132a69a617b8d315c3b0b21e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725145250428368437,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6627a1571b503abff6d9495763e77905,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1,PodSandboxId:0ec8ad0e93908d9e917cab57362321f9e326c62ac8362baa80ac19c8b67869b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725145250373951807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5a6c8b0cd156f96b3dc0eb23911d2e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5,PodSandboxId:17b67a34f259d067a9766198f97d0dd67f2b8ac190f5267941dca8f4b5910780,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725145250333252755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33735aba1476d2b79d46054c0907f94e,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3126536-04c9-4f9f-aee9-09933a203932 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e4b911771886       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6354e5a895ed6       busybox-7dff88458-d8fm4
	be5242d84e14b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   e2197d9bd51fe       kindnet-db4rl
	4e4eec6d5cd86       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   f53b33402e0d6       coredns-6f6b679f8f-qc6xv
	766cdf89d49cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   890e773dbbc41       storage-provisioner
	af182014be0d2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   f4b8369314621       kube-proxy-d26wn
	3457ed73a33a3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   8ba69a9980cb0       etcd-multinode-328486
	355c302ecd1e4       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   5a7b0aa2a1c6c       kube-apiserver-multinode-328486
	292216e837e65       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   adc69baecb81c       kube-scheduler-multinode-328486
	86c26f347c5db       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   bfc9ffecf4faf       kube-controller-manager-multinode-328486
	5ea00636a8abd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   211ac5f7cdc1e       busybox-7dff88458-d8fm4
	1854f60b239ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   baeb7772fb632       coredns-6f6b679f8f-qc6xv
	59e76f99b3b4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   0d9ed57918561       storage-provisioner
	b02f023f5ad89       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   53bc0437f5619       kindnet-db4rl
	a30878aa9b46b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   a194e9b69b972       kube-proxy-d26wn
	4ba23ea858780       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   70a70b890a48e       kube-scheduler-multinode-328486
	980e8b26efbbf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   27f9441cd679a       etcd-multinode-328486
	4761d2795a972       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   0ec8ad0e93908       kube-apiserver-multinode-328486
	4812c3914931d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   17b67a34f259d       kube-controller-manager-multinode-328486
	
	
	==> coredns [1854f60b239ec135cd5ccba1e5f4256f1a741f5a0b9a8dfd629201daf1066391] <==
	[INFO] 10.244.1.2:42525 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001745588s
	[INFO] 10.244.1.2:35977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138438s
	[INFO] 10.244.1.2:42908 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104633s
	[INFO] 10.244.1.2:47781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001204934s
	[INFO] 10.244.1.2:54698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065189s
	[INFO] 10.244.1.2:46331 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069999s
	[INFO] 10.244.1.2:39274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074391s
	[INFO] 10.244.0.3:55327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225689s
	[INFO] 10.244.0.3:57039 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048526s
	[INFO] 10.244.0.3:44547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045341s
	[INFO] 10.244.0.3:58409 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036559s
	[INFO] 10.244.1.2:38226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120244s
	[INFO] 10.244.1.2:43580 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124347s
	[INFO] 10.244.1.2:50032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085135s
	[INFO] 10.244.1.2:57073 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077512s
	[INFO] 10.244.0.3:49426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107976s
	[INFO] 10.244.0.3:50100 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101425s
	[INFO] 10.244.0.3:57202 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009954s
	[INFO] 10.244.0.3:52483 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155515s
	[INFO] 10.244.1.2:56872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128138s
	[INFO] 10.244.1.2:60473 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105079s
	[INFO] 10.244.1.2:36850 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113804s
	[INFO] 10.244.1.2:50146 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083122s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e4eec6d5cd860e77aa37fcb428ab0c0372d108da1a1237eb34dd4933fb58f3b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52559 - 46687 "HINFO IN 8178519279946959702.1478944245455869154. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014545178s
	
	
	==> describe nodes <==
	Name:               multinode-328486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=multinode-328486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T23_00_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:00:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:11:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:00:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:00:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:00:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:07:42 +0000   Sat, 31 Aug 2024 23:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-328486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b85ddfecfee143998777e9211191b0e8
	  System UUID:                b85ddfec-fee1-4399-8777-e9211191b0e8
	  Boot ID:                    80cd42b0-9834-46da-9c3b-79f201f788b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d8fm4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 coredns-6f6b679f8f-qc6xv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-328486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-db4rl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-328486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-328486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-d26wn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-328486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-328486 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-328486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-328486 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-328486 event: Registered Node multinode-328486 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-328486 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-328486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-328486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-328486 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node multinode-328486 event: Registered Node multinode-328486 in Controller
	
	
	Name:               multinode-328486-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328486-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=multinode-328486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T23_08_24_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:08:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328486-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:09:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:10:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:10:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:10:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 23:08:54 +0000   Sat, 31 Aug 2024 23:10:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    multinode-328486-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b9b162ba7134f7f94c043db2101b498
	  System UUID:                6b9b162b-a713-4f7f-94c0-43db2101b498
	  Boot ID:                    952cd52a-e7a5-40bc-997c-ebc4b1a4d144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t729k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-zh78t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-qp4jf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-328486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-328486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-328486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-328486-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-328486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-328486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-328486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-328486-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-328486-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.057459] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.171044] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.148491] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.276772] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.977559] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.413488] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.066783] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989732] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.075167] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.160953] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.132683] kauditd_printk_skb: 21 callbacks suppressed
	[Aug31 23:01] kauditd_printk_skb: 56 callbacks suppressed
	[Aug31 23:02] kauditd_printk_skb: 14 callbacks suppressed
	[Aug31 23:07] systemd-fstab-generator[2652]: Ignoring "noauto" option for root device
	[  +0.153386] systemd-fstab-generator[2665]: Ignoring "noauto" option for root device
	[  +0.162678] systemd-fstab-generator[2679]: Ignoring "noauto" option for root device
	[  +0.151813] systemd-fstab-generator[2691]: Ignoring "noauto" option for root device
	[  +0.275954] systemd-fstab-generator[2719]: Ignoring "noauto" option for root device
	[  +1.661512] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +1.644108] systemd-fstab-generator[2933]: Ignoring "noauto" option for root device
	[  +1.058343] kauditd_printk_skb: 169 callbacks suppressed
	[  +5.139641] kauditd_printk_skb: 35 callbacks suppressed
	[ +14.872782] systemd-fstab-generator[3778]: Ignoring "noauto" option for root device
	[  +0.100182] kauditd_printk_skb: 4 callbacks suppressed
	[Aug31 23:08] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3457ed73a33a381ca89570f1b4a1f54b1da2befee83c4130020063bc2d2a3604] <==
	{"level":"info","ts":"2024-08-31T23:07:39.422800Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:07:39.422881Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:07:39.449031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:07:39.462196Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-31T23:07:39.464759Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T23:07:39.464513Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:07:39.467847Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:07:39.465749Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T23:07:41.270007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-31T23:07:41.270065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-31T23:07:41.270120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-08-31T23:07:41.270135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.270141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.270161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.270173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-08-31T23:07:41.276723Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:multinode-328486 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T23:07:41.276772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:07:41.276960Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T23:07:41.277008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T23:07:41.277043Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:07:41.278074Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:07:41.278075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:07:41.278983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2024-08-31T23:07:41.279318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T23:08:27.788733Z","caller":"traceutil/trace.go:171","msg":"trace[871104458] transaction","detail":"{read_only:false; response_revision:1043; number_of_response:1; }","duration":"121.159836ms","start":"2024-08-31T23:08:27.667547Z","end":"2024-08-31T23:08:27.788707Z","steps":["trace[871104458] 'process raft request'  (duration: 121.034646ms)"],"step_count":1}
	
	
	==> etcd [980e8b26efbbf49fd516f2f6cf58ddc7b1c55e40ac8496646c4c2ee1e23d5bdd] <==
	{"level":"warn","ts":"2024-08-31T23:02:48.323434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.438424ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3701556105952621491 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-vvmtx\" mod_revision:584 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-31T23:02:48.323981Z","caller":"traceutil/trace.go:171","msg":"trace[1170768638] linearizableReadLoop","detail":"{readStateIndex:618; appliedIndex:617; }","duration":"505.298928ms","start":"2024-08-31T23:02:47.818660Z","end":"2024-08-31T23:02:48.323959Z","steps":["trace[1170768638] 'read index received'  (duration: 303.875598ms)","trace[1170768638] 'applied index is now lower than readState.Index'  (duration: 201.422232ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T23:02:48.324159Z","caller":"traceutil/trace.go:171","msg":"trace[2056374227] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"508.792732ms","start":"2024-08-31T23:02:47.815356Z","end":"2024-08-31T23:02:48.324148Z","steps":["trace[2056374227] 'process raft request'  (duration: 307.237928ms)","trace[2056374227] 'compare'  (duration: 200.309855ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-31T23:02:48.324263Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:02:47.815307Z","time spent":"508.911592ms","remote":"127.0.0.1:56144","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1322,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-vvmtx\" mod_revision:584 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-vvmtx\" > >"}
	{"level":"warn","ts":"2024-08-31T23:02:48.324499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"505.83308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.324550Z","caller":"traceutil/trace.go:171","msg":"trace[1572202024] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:585; }","duration":"505.885129ms","start":"2024-08-31T23:02:47.818656Z","end":"2024-08-31T23:02:48.324541Z","steps":["trace[1572202024] 'agreement among raft nodes before linearized reading'  (duration: 505.812142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:02:48.324595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:02:47.818625Z","time spent":"505.963476ms","remote":"127.0.0.1:56144","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":28,"request content":"key:\"/registry/certificatesigningrequests\" limit:1 "}
	{"level":"warn","ts":"2024-08-31T23:02:48.324714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.516104ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.324749Z","caller":"traceutil/trace.go:171","msg":"trace[1770615465] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:585; }","duration":"468.55095ms","start":"2024-08-31T23:02:47.856192Z","end":"2024-08-31T23:02:48.324743Z","steps":["trace[1770615465] 'agreement among raft nodes before linearized reading'  (duration: 468.507859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:02:48.325187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.707646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.325278Z","caller":"traceutil/trace.go:171","msg":"trace[635815810] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:585; }","duration":"160.800526ms","start":"2024-08-31T23:02:48.164470Z","end":"2024-08-31T23:02:48.325271Z","steps":["trace[635815810] 'agreement among raft nodes before linearized reading'  (duration: 160.694228ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:02:48.636336Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.790298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-328486-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:02:48.636506Z","caller":"traceutil/trace.go:171","msg":"trace[449095893] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-328486-m03; range_end:; response_count:0; response_revision:586; }","duration":"206.974516ms","start":"2024-08-31T23:02:48.429520Z","end":"2024-08-31T23:02:48.636494Z","steps":["trace[449095893] 'range keys from in-memory index tree'  (duration: 206.737452ms)"],"step_count":1}
	2024/08/31 23:02:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-31T23:03:41.718977Z","caller":"traceutil/trace.go:171","msg":"trace[1014906409] transaction","detail":"{read_only:false; response_revision:710; number_of_response:1; }","duration":"109.758374ms","start":"2024-08-31T23:03:41.609199Z","end":"2024-08-31T23:03:41.718957Z","steps":["trace[1014906409] 'process raft request'  (duration: 109.635287ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T23:06:02.653747Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-31T23:06:02.653909Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-328486","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	{"level":"warn","ts":"2024-08-31T23:06:02.654028Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T23:06:02.654119Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T23:06:02.743452Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T23:06:02.743508Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.107:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-31T23:06:02.743577Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec1614c5c0f7335e","current-leader-member-id":"ec1614c5c0f7335e"}
	{"level":"info","ts":"2024-08-31T23:06:02.746362Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:06:02.746501Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-08-31T23:06:02.746509Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-328486","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"]}
	
	
	==> kernel <==
	 23:11:48 up 11 min,  0 users,  load average: 0.13, 0.27, 0.17
	Linux multinode-328486 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b02f023f5ad8914c133c72a3864de3b713d49a6accc8b4b90525f4e839bf4999] <==
	I0831 23:05:16.016356       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:26.015598       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:26.015727       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:26.015875       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:26.015910       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:05:26.015996       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:26.016018       1 main.go:299] handling current node
	I0831 23:05:36.015664       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:36.015732       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:36.015941       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:36.015948       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:05:36.016002       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:36.016038       1 main.go:299] handling current node
	I0831 23:05:46.016714       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:46.016777       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:46.016942       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:46.016967       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	I0831 23:05:46.017029       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:46.017051       1 main.go:299] handling current node
	I0831 23:05:56.016329       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:05:56.016529       1 main.go:299] handling current node
	I0831 23:05:56.016572       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:05:56.016592       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:56.016729       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0831 23:05:56.016769       1 main.go:322] Node multinode-328486-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [be5242d84e14b604790a0a55d186a1d64f4cecef92c2fdabdc91654d7a25b342] <==
	I0831 23:10:45.116477       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:10:55.124650       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:10:55.124774       1 main.go:299] handling current node
	I0831 23:10:55.124815       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:10:55.124842       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:11:05.124579       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:11:05.124625       1 main.go:299] handling current node
	I0831 23:11:05.124640       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:11:05.124646       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:11:15.115663       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:11:15.115784       1 main.go:299] handling current node
	I0831 23:11:15.115831       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:11:15.115858       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:11:25.122981       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:11:25.123079       1 main.go:299] handling current node
	I0831 23:11:25.123108       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:11:25.123126       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:11:35.121914       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:11:35.121967       1 main.go:299] handling current node
	I0831 23:11:35.122001       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:11:35.122009       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	I0831 23:11:45.116704       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0831 23:11:45.116812       1 main.go:299] handling current node
	I0831 23:11:45.116833       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0831 23:11:45.116838       1 main.go:322] Node multinode-328486-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [355c302ecd1e4616ebdb10eace47dabdce70dccab69f9b0d7909e32b7630ceb2] <==
	I0831 23:07:42.613076       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:07:42.613439       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:07:42.617578       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 23:07:42.625033       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:07:42.625146       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:07:42.625183       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:07:42.625518       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:07:42.625550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:07:42.625629       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:07:42.625674       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 23:07:42.629597       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:07:42.629640       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:07:42.629647       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:07:42.629653       1 cache.go:39] Caches are synced for autoregister controller
	I0831 23:07:42.650811       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:07:42.650860       1 policy_source.go:224] refreshing policies
	I0831 23:07:42.729909       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:07:43.517837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0831 23:07:44.880101       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0831 23:07:45.007163       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0831 23:07:45.026885       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0831 23:07:45.095942       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0831 23:07:45.107527       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0831 23:07:46.214691       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 23:07:46.264883       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4761d2795a9724f8f9513617392665461c24518da9664d0c99b70d821d5780e1] <==
	W0831 23:06:02.673329       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.675858       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.675946       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.675998       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676036       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676207       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676276       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676324       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676370       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676489       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676518       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676575       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676613       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676642       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676708       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676767       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676816       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676848       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676875       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676921       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676949       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.676996       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.677044       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.677085       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0831 23:06:02.677154       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4812c3914931d5a8936ff13c32923e0ed0a9ef49d66be5498dbb0d8ee1d279b5] <==
	I0831 23:03:35.925860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:35.926200       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:03:37.470229       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-328486-m03\" does not exist"
	I0831 23:03:37.470522       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:03:37.482811       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-328486-m03" podCIDRs=["10.244.3.0/24"]
	I0831 23:03:37.482862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:37.482886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:37.485720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:37.936938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:38.263935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:39.907859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:47.823343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:57.277224       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:03:57.277242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:57.290007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:03:59.908549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:39.929786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:39.929914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:04:39.946944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:44.963643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:04:44.994170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:04:45.018281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.633606ms"
	I0831 23:04:45.020165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.846µs"
	I0831 23:04:45.033481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:04:55.108050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	
	
	==> kube-controller-manager [86c26f347c5dbc58a52956f41efd0891a34cde5b4456972d41a29c067aa3c0c5] <==
	I0831 23:09:02.346923       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-328486-m03" podCIDRs=["10.244.2.0/24"]
	I0831 23:09:02.348010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:02.348193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:02.355361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:02.756131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:03.090899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:06.105185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:12.534980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:22.096786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:09:22.097060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:22.114101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:26.063117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:26.790853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:26.811220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:27.265151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m03"
	I0831 23:09:27.265792       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-328486-m02"
	I0831 23:10:06.078089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:10:06.095680       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:10:06.108602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.645482ms"
	I0831 23:10:06.108717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.119µs"
	I0831 23:10:11.210871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-328486-m02"
	I0831 23:10:25.954470       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4phsq"
	I0831 23:10:25.981617       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4phsq"
	I0831 23:10:25.981723       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rvzrt"
	I0831 23:10:26.006912       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rvzrt"
	
	
	==> kube-proxy [a30878aa9b46bb9998432e916f0afd2450542b68a2d24bad60637c96ece9f060] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 23:01:01.691265       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 23:01:01.704580       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0831 23:01:01.704775       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 23:01:01.788894       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 23:01:01.788940       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 23:01:01.788967       1 server_linux.go:169] "Using iptables Proxier"
	I0831 23:01:01.791574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 23:01:01.791854       1 server.go:483] "Version info" version="v1.31.0"
	I0831 23:01:01.791885       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:01:01.793744       1 config.go:197] "Starting service config controller"
	I0831 23:01:01.793770       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 23:01:01.793794       1 config.go:104] "Starting endpoint slice config controller"
	I0831 23:01:01.793798       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 23:01:01.794159       1 config.go:326] "Starting node config controller"
	I0831 23:01:01.794197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 23:01:01.894875       1 shared_informer.go:320] Caches are synced for node config
	I0831 23:01:01.894926       1 shared_informer.go:320] Caches are synced for service config
	I0831 23:01:01.894966       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [af182014be0d2d9b3e32e4fdf12196d90898159baada1d779b7cdf3234a4e68a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 23:07:44.216666       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 23:07:44.231933       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0831 23:07:44.232011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 23:07:44.303942       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 23:07:44.303988       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 23:07:44.304018       1 server_linux.go:169] "Using iptables Proxier"
	I0831 23:07:44.321062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 23:07:44.321313       1 server.go:483] "Version info" version="v1.31.0"
	I0831 23:07:44.321323       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:07:44.323228       1 config.go:197] "Starting service config controller"
	I0831 23:07:44.323323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 23:07:44.323592       1 config.go:326] "Starting node config controller"
	I0831 23:07:44.323620       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 23:07:44.323885       1 config.go:104] "Starting endpoint slice config controller"
	I0831 23:07:44.323916       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 23:07:44.424464       1 shared_informer.go:320] Caches are synced for node config
	I0831 23:07:44.424516       1 shared_informer.go:320] Caches are synced for service config
	I0831 23:07:44.425518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [292216e837e65faa0759880c9606e18e126c5de39ab07e79c049347913a6ee97] <==
	I0831 23:07:40.188290       1 serving.go:386] Generated self-signed cert in-memory
	W0831 23:07:42.602871       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0831 23:07:42.602914       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 23:07:42.602924       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 23:07:42.602932       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 23:07:42.643737       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0831 23:07:42.643789       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:07:42.646133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0831 23:07:42.646205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0831 23:07:42.646230       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0831 23:07:42.646342       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:07:42.746978       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4ba23ea85878022594d5edc07d74638ea675282a2f1b613a3cd9593355a2ff90] <==
	E0831 23:00:53.289947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.289980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:53.290036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:53.290110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290258       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 23:00:53.291205       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 23:00:53.290523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 23:00:53.291340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 23:00:53.291490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 23:00:53.291548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 23:00:53.291619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.290728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:53.291671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:53.291001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 23:00:53.291732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:54.119609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:54.119860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:54.164268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 23:00:54.164672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 23:00:54.878647       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 23:06:02.647125       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 31 23:10:38 multinode-328486 kubelet[2940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 23:10:38 multinode-328486 kubelet[2940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 23:10:38 multinode-328486 kubelet[2940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 23:10:38 multinode-328486 kubelet[2940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 23:10:38 multinode-328486 kubelet[2940]: E0831 23:10:38.368330    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145838368078838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:10:38 multinode-328486 kubelet[2940]: E0831 23:10:38.368523    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145838368078838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:10:48 multinode-328486 kubelet[2940]: E0831 23:10:48.370777    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145848369789406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:10:48 multinode-328486 kubelet[2940]: E0831 23:10:48.371337    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145848369789406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:10:58 multinode-328486 kubelet[2940]: E0831 23:10:58.373172    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145858372742843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:10:58 multinode-328486 kubelet[2940]: E0831 23:10:58.373215    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145858372742843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:08 multinode-328486 kubelet[2940]: E0831 23:11:08.375565    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145868374651492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:08 multinode-328486 kubelet[2940]: E0831 23:11:08.375775    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145868374651492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:18 multinode-328486 kubelet[2940]: E0831 23:11:18.379736    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145878379086030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:18 multinode-328486 kubelet[2940]: E0831 23:11:18.379790    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145878379086030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:28 multinode-328486 kubelet[2940]: E0831 23:11:28.381468    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145888380728044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:28 multinode-328486 kubelet[2940]: E0831 23:11:28.381879    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145888380728044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:38 multinode-328486 kubelet[2940]: E0831 23:11:38.287134    2940 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 23:11:38 multinode-328486 kubelet[2940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 23:11:38 multinode-328486 kubelet[2940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 23:11:38 multinode-328486 kubelet[2940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 23:11:38 multinode-328486 kubelet[2940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 23:11:38 multinode-328486 kubelet[2940]: E0831 23:11:38.384112    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145898383706256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:38 multinode-328486 kubelet[2940]: E0831 23:11:38.384208    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145898383706256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:48 multinode-328486 kubelet[2940]: E0831 23:11:48.385610    2940 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145908385051615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:11:48 multinode-328486 kubelet[2940]: E0831 23:11:48.385648    2940 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145908385051615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 23:11:48.011798   53117 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18943-13149/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-328486 -n multinode-328486
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-328486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:286: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.44s)
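Note on the logs above: the kube-proxy "Error cleaning up nftables rules ... Operation not supported" messages and the kubelet "Could not set up iptables canary" / ip6tables `nat' errors are consistent with a guest kernel that lacks nf_tables and ip6table_nat support; kube-proxy falls back to the iptables backend ("Using iptables Proxier"), so these messages are background noise rather than an obvious cause of the stop failure. A minimal way to confirm the missing kernel support on a live profile would be (illustrative commands, not part of this test run; the profile name is taken from the logs above):

	minikube -p multinode-328486 ssh -- 'lsmod | grep -E "nf_tables|ip6table_nat" || echo "nftables/ip6 NAT modules not loaded"'
	minikube -p multinode-328486 ssh -- 'sudo nft list tables'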

                                                
                                    
TestPreload (280.46s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-135302 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0831 23:16:42.547507   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-135302 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m17.078875335s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-135302 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-135302 image pull gcr.io/k8s-minikube/busybox: (3.249022804s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-135302
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-135302: exit status 82 (2m0.456118929s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-135302"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-135302 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-31 23:19:57.033498029 +0000 UTC m=+4447.859434271
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-135302 -n test-preload-135302
E0831 23:19:59.874917   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-135302 -n test-preload-135302: exit status 3 (18.557398229s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 23:20:15.587700   56425 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.153:22: connect: no route to host
	E0831 23:20:15.587723   56425 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.153:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "test-preload-135302" host is not running, skipping log retrieval (state="Error")
helpers_test.go:176: Cleaning up "test-preload-135302" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-135302
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-135302: (1.117880951s)
--- FAIL: TestPreload (280.46s)
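Note: exit status 82 ("GUEST_STOP_TIMEOUT") above means the stop window elapsed while the VM stayed in state "Running", and the follow-up status check then failed because 192.168.39.153:22 was unreachable. When reproducing locally with the kvm2 driver, a hedged cleanup sketch (assuming the libvirt domain name matches the profile name, as kvm2 normally uses) is:

	virsh -c qemu:///system list --all                      # is test-preload-135302 still defined/running?
	virsh -c qemu:///system destroy test-preload-135302     # hard power-off if the graceful shutdown hangs
	out/minikube-linux-amd64 delete -p test-preload-135302  # remove the profile, as the test cleanup does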

                                                
                                    
TestKubernetesUpgrade (363.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m4.635453507s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-828713] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-828713" primary control-plane node in "kubernetes-upgrade-828713" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:23:23.534923   60556 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:23:23.535016   60556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:23:23.535024   60556 out.go:358] Setting ErrFile to fd 2...
	I0831 23:23:23.535027   60556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:23:23.535227   60556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:23:23.535810   60556 out.go:352] Setting JSON to false
	I0831 23:23:23.536770   60556 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7550,"bootTime":1725139053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:23:23.536824   60556 start.go:139] virtualization: kvm guest
	I0831 23:23:23.538587   60556 out.go:177] * [kubernetes-upgrade-828713] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:23:23.540434   60556 notify.go:220] Checking for updates...
	I0831 23:23:23.540468   60556 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:23:23.541592   60556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:23:23.542958   60556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:23:23.544512   60556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:23:23.545764   60556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:23:23.547011   60556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:23:23.549057   60556 config.go:182] Loaded profile config "NoKubernetes-711704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:23:23.549218   60556 config.go:182] Loaded profile config "offline-crio-651504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:23:23.549335   60556 config.go:182] Loaded profile config "running-upgrade-741050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0831 23:23:23.549442   60556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:23:23.584057   60556 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 23:23:23.585814   60556 start.go:297] selected driver: kvm2
	I0831 23:23:23.585830   60556 start.go:901] validating driver "kvm2" against <nil>
	I0831 23:23:23.585840   60556 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:23:23.586520   60556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:23:23.586585   60556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 23:23:23.601655   60556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 23:23:23.601694   60556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 23:23:23.601900   60556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 23:23:23.601966   60556 cni.go:84] Creating CNI manager for ""
	I0831 23:23:23.601979   60556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 23:23:23.601988   60556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 23:23:23.602060   60556 start.go:340] cluster config:
	{Name:kubernetes-upgrade-828713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-828713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:23:23.602164   60556 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:23:23.603700   60556 out.go:177] * Starting "kubernetes-upgrade-828713" primary control-plane node in "kubernetes-upgrade-828713" cluster
	I0831 23:23:23.605151   60556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 23:23:23.605202   60556 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0831 23:23:23.605211   60556 cache.go:56] Caching tarball of preloaded images
	I0831 23:23:23.605302   60556 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 23:23:23.605329   60556 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0831 23:23:23.605412   60556 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/config.json ...
	I0831 23:23:23.605429   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/config.json: {Name:mkb273ff1c980ec6a163d22fa8f80cc1bf4f327b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:23:23.605579   60556 start.go:360] acquireMachinesLock for kubernetes-upgrade-828713: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 23:23:57.739650   60556 start.go:364] duration metric: took 34.134045359s to acquireMachinesLock for "kubernetes-upgrade-828713"
	I0831 23:23:57.739716   60556 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-828713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-828713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:23:57.739862   60556 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 23:23:57.743141   60556 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 23:23:57.743386   60556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:23:57.743444   60556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:23:57.759169   60556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I0831 23:23:57.759547   60556 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:23:57.760140   60556 main.go:141] libmachine: Using API Version  1
	I0831 23:23:57.760164   60556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:23:57.760504   60556 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:23:57.760699   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetMachineName
	I0831 23:23:57.760878   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:23:57.761049   60556 start.go:159] libmachine.API.Create for "kubernetes-upgrade-828713" (driver="kvm2")
	I0831 23:23:57.761085   60556 client.go:168] LocalClient.Create starting
	I0831 23:23:57.761126   60556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem
	I0831 23:23:57.761164   60556 main.go:141] libmachine: Decoding PEM data...
	I0831 23:23:57.761192   60556 main.go:141] libmachine: Parsing certificate...
	I0831 23:23:57.761256   60556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem
	I0831 23:23:57.761283   60556 main.go:141] libmachine: Decoding PEM data...
	I0831 23:23:57.761302   60556 main.go:141] libmachine: Parsing certificate...
	I0831 23:23:57.761324   60556 main.go:141] libmachine: Running pre-create checks...
	I0831 23:23:57.761341   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .PreCreateCheck
	I0831 23:23:57.761675   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetConfigRaw
	I0831 23:23:57.762110   60556 main.go:141] libmachine: Creating machine...
	I0831 23:23:57.762125   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .Create
	I0831 23:23:57.762254   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Creating KVM machine...
	I0831 23:23:57.763361   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found existing default KVM network
	I0831 23:23:57.764490   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:57.764340   61134 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2b:c3:bb} reservation:<nil>}
	I0831 23:23:57.765517   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:57.765455   61134 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002645c0}
	I0831 23:23:57.765554   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | created network xml: 
	I0831 23:23:57.765575   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | <network>
	I0831 23:23:57.765606   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |   <name>mk-kubernetes-upgrade-828713</name>
	I0831 23:23:57.765620   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |   <dns enable='no'/>
	I0831 23:23:57.765630   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |   
	I0831 23:23:57.765639   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0831 23:23:57.765660   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |     <dhcp>
	I0831 23:23:57.765670   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0831 23:23:57.765679   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |     </dhcp>
	I0831 23:23:57.765687   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |   </ip>
	I0831 23:23:57.765696   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG |   
	I0831 23:23:57.765704   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | </network>
	I0831 23:23:57.765718   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | 
	I0831 23:23:57.771029   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | trying to create private KVM network mk-kubernetes-upgrade-828713 192.168.50.0/24...
	I0831 23:23:57.839423   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | private KVM network mk-kubernetes-upgrade-828713 192.168.50.0/24 created
	I0831 23:23:57.839464   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting up store path in /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713 ...
	I0831 23:23:57.839480   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Building disk image from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 23:23:57.839637   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:57.839394   61134 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:23:57.839790   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Downloading /home/jenkins/minikube-integration/18943-13149/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso...
	I0831 23:23:58.080299   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:58.080178   61134 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa...
	I0831 23:23:58.251721   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:58.251581   61134 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/kubernetes-upgrade-828713.rawdisk...
	I0831 23:23:58.251758   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Writing magic tar header
	I0831 23:23:58.251778   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Writing SSH key tar header
	I0831 23:23:58.251792   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:58.251727   61134 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713 ...
	I0831 23:23:58.251878   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713
	I0831 23:23:58.251907   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube/machines
	I0831 23:23:58.251938   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713 (perms=drwx------)
	I0831 23:23:58.251954   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:23:58.251970   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18943-13149
	I0831 23:23:58.251989   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube/machines (perms=drwxr-xr-x)
	I0831 23:23:58.252005   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0831 23:23:58.252020   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149/.minikube (perms=drwxr-xr-x)
	I0831 23:23:58.252036   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting executable bit set on /home/jenkins/minikube-integration/18943-13149 (perms=drwxrwxr-x)
	I0831 23:23:58.252047   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0831 23:23:58.252055   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home/jenkins
	I0831 23:23:58.252063   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0831 23:23:58.252069   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Checking permissions on dir: /home
	I0831 23:23:58.252095   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Skipping /home - not owner
	I0831 23:23:58.252103   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Creating domain...
	I0831 23:23:58.253072   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) define libvirt domain using xml: 
	I0831 23:23:58.253095   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) <domain type='kvm'>
	I0831 23:23:58.253104   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <name>kubernetes-upgrade-828713</name>
	I0831 23:23:58.253109   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <memory unit='MiB'>2200</memory>
	I0831 23:23:58.253118   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <vcpu>2</vcpu>
	I0831 23:23:58.253134   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <features>
	I0831 23:23:58.253153   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <acpi/>
	I0831 23:23:58.253169   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <apic/>
	I0831 23:23:58.253196   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <pae/>
	I0831 23:23:58.253217   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     
	I0831 23:23:58.253230   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   </features>
	I0831 23:23:58.253240   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <cpu mode='host-passthrough'>
	I0831 23:23:58.253246   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   
	I0831 23:23:58.253257   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   </cpu>
	I0831 23:23:58.253265   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <os>
	I0831 23:23:58.253270   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <type>hvm</type>
	I0831 23:23:58.253285   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <boot dev='cdrom'/>
	I0831 23:23:58.253301   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <boot dev='hd'/>
	I0831 23:23:58.253314   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <bootmenu enable='no'/>
	I0831 23:23:58.253324   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   </os>
	I0831 23:23:58.253332   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   <devices>
	I0831 23:23:58.253346   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <disk type='file' device='cdrom'>
	I0831 23:23:58.253359   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/boot2docker.iso'/>
	I0831 23:23:58.253371   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <target dev='hdc' bus='scsi'/>
	I0831 23:23:58.253382   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <readonly/>
	I0831 23:23:58.253394   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </disk>
	I0831 23:23:58.253406   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <disk type='file' device='disk'>
	I0831 23:23:58.253418   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0831 23:23:58.253432   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <source file='/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/kubernetes-upgrade-828713.rawdisk'/>
	I0831 23:23:58.253444   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <target dev='hda' bus='virtio'/>
	I0831 23:23:58.253460   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </disk>
	I0831 23:23:58.253470   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <interface type='network'>
	I0831 23:23:58.253480   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <source network='mk-kubernetes-upgrade-828713'/>
	I0831 23:23:58.253489   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <model type='virtio'/>
	I0831 23:23:58.253496   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </interface>
	I0831 23:23:58.253505   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <interface type='network'>
	I0831 23:23:58.253513   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <source network='default'/>
	I0831 23:23:58.253523   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <model type='virtio'/>
	I0831 23:23:58.253537   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </interface>
	I0831 23:23:58.253550   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <serial type='pty'>
	I0831 23:23:58.253561   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <target port='0'/>
	I0831 23:23:58.253572   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </serial>
	I0831 23:23:58.253582   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <console type='pty'>
	I0831 23:23:58.253595   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <target type='serial' port='0'/>
	I0831 23:23:58.253607   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </console>
	I0831 23:23:58.253620   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     <rng model='virtio'>
	I0831 23:23:58.253632   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)       <backend model='random'>/dev/random</backend>
	I0831 23:23:58.253643   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     </rng>
	I0831 23:23:58.253653   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     
	I0831 23:23:58.253666   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)     
	I0831 23:23:58.253680   60556 main.go:141] libmachine: (kubernetes-upgrade-828713)   </devices>
	I0831 23:23:58.253690   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) </domain>
	I0831 23:23:58.253699   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) 
	I0831 23:23:58.257930   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:81:f8:b8 in network default
	I0831 23:23:58.258561   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Ensuring networks are active...
	I0831 23:23:58.258580   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:23:58.259131   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Ensuring network default is active
	I0831 23:23:58.259504   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Ensuring network mk-kubernetes-upgrade-828713 is active
	I0831 23:23:58.260042   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Getting domain xml...
	I0831 23:23:58.260712   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Creating domain...
	I0831 23:23:59.470893   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Waiting to get IP...
	I0831 23:23:59.471803   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:23:59.472191   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:23:59.472220   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:59.472176   61134 retry.go:31] will retry after 200.264413ms: waiting for machine to come up
	I0831 23:23:59.674740   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:23:59.675187   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:23:59.675210   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:23:59.675138   61134 retry.go:31] will retry after 333.309447ms: waiting for machine to come up
	I0831 23:24:00.010276   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:00.010674   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:00.010697   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:00.010615   61134 retry.go:31] will retry after 391.234961ms: waiting for machine to come up
	I0831 23:24:00.403177   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:00.403649   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:00.403669   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:00.403607   61134 retry.go:31] will retry after 425.250621ms: waiting for machine to come up
	I0831 23:24:00.830196   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:00.830656   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:00.830683   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:00.830618   61134 retry.go:31] will retry after 483.652721ms: waiting for machine to come up
	I0831 23:24:01.316334   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:01.316798   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:01.316827   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:01.316754   61134 retry.go:31] will retry after 841.954997ms: waiting for machine to come up
	I0831 23:24:02.160246   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:02.160805   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:02.160832   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:02.160717   61134 retry.go:31] will retry after 1.132236601s: waiting for machine to come up
	I0831 23:24:03.294853   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:03.295972   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:03.295998   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:03.295927   61134 retry.go:31] will retry after 1.218600888s: waiting for machine to come up
	I0831 23:24:04.515928   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:04.516437   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:04.516478   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:04.516394   61134 retry.go:31] will retry after 1.576754363s: waiting for machine to come up
	I0831 23:24:06.095322   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:06.095871   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:06.095900   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:06.095822   61134 retry.go:31] will retry after 2.075566767s: waiting for machine to come up
	I0831 23:24:08.173577   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:08.174031   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:08.174060   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:08.173992   61134 retry.go:31] will retry after 1.924948252s: waiting for machine to come up
	I0831 23:24:10.100679   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:10.101210   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:10.101238   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:10.101184   61134 retry.go:31] will retry after 3.602548834s: waiting for machine to come up
	I0831 23:24:13.705862   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:13.706364   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:13.706394   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:13.706311   61134 retry.go:31] will retry after 3.448107257s: waiting for machine to come up
	I0831 23:24:17.157806   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:17.158280   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find current IP address of domain kubernetes-upgrade-828713 in network mk-kubernetes-upgrade-828713
	I0831 23:24:17.158303   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | I0831 23:24:17.158256   61134 retry.go:31] will retry after 3.860739948s: waiting for machine to come up
	I0831 23:24:21.020539   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.021002   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Found IP for machine: 192.168.50.109
	I0831 23:24:21.021022   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Reserving static IP address...
	I0831 23:24:21.021038   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has current primary IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.021364   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-828713", mac: "52:54:00:f1:b2:56", ip: "192.168.50.109"} in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.093137   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Getting to WaitForSSH function...
	I0831 23:24:21.093169   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Reserved static IP address: 192.168.50.109
	I0831 23:24:21.093188   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Waiting for SSH to be available...
	I0831 23:24:21.096319   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.096735   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.096756   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.096954   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Using SSH client type: external
	I0831 23:24:21.096972   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Using SSH private key: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa (-rw-------)
	I0831 23:24:21.097005   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0831 23:24:21.097020   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | About to run SSH command:
	I0831 23:24:21.097033   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | exit 0
	I0831 23:24:21.227543   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | SSH cmd err, output: <nil>: 
	I0831 23:24:21.227827   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) KVM machine creation complete!
	I0831 23:24:21.228199   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetConfigRaw
	I0831 23:24:21.228829   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:21.229035   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:21.229194   60556 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0831 23:24:21.229220   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetState
	I0831 23:24:21.230722   60556 main.go:141] libmachine: Detecting operating system of created instance...
	I0831 23:24:21.230735   60556 main.go:141] libmachine: Waiting for SSH to be available...
	I0831 23:24:21.230741   60556 main.go:141] libmachine: Getting to WaitForSSH function...
	I0831 23:24:21.230746   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:21.232961   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.233291   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.233344   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.233428   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:21.233616   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.233784   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.233951   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:21.234124   60556 main.go:141] libmachine: Using SSH client type: native
	I0831 23:24:21.234401   60556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0831 23:24:21.234417   60556 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0831 23:24:21.346813   60556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:24:21.346835   60556 main.go:141] libmachine: Detecting the provisioner...
	I0831 23:24:21.346842   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:21.349865   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.350219   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.350272   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.350402   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:21.350590   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.350738   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.350877   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:21.351041   60556 main.go:141] libmachine: Using SSH client type: native
	I0831 23:24:21.351242   60556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0831 23:24:21.351258   60556 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0831 23:24:21.476648   60556 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0831 23:24:21.476755   60556 main.go:141] libmachine: found compatible host: buildroot
	I0831 23:24:21.476770   60556 main.go:141] libmachine: Provisioning with buildroot...
	I0831 23:24:21.476779   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetMachineName
	I0831 23:24:21.477016   60556 buildroot.go:166] provisioning hostname "kubernetes-upgrade-828713"
	I0831 23:24:21.477039   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetMachineName
	I0831 23:24:21.477179   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:21.480051   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.480384   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.480418   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.480568   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:21.480752   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.480914   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.481080   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:21.481245   60556 main.go:141] libmachine: Using SSH client type: native
	I0831 23:24:21.481456   60556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0831 23:24:21.481473   60556 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-828713 && echo "kubernetes-upgrade-828713" | sudo tee /etc/hostname
	I0831 23:24:21.610863   60556 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-828713
	
	I0831 23:24:21.610889   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:21.614205   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.614568   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.614601   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.614768   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:21.614988   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.615138   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:21.615269   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:21.615483   60556 main.go:141] libmachine: Using SSH client type: native
	I0831 23:24:21.615693   60556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0831 23:24:21.615710   60556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-828713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-828713/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-828713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:24:21.740430   60556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:24:21.740456   60556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 23:24:21.740493   60556 buildroot.go:174] setting up certificates
	I0831 23:24:21.740503   60556 provision.go:84] configureAuth start
	I0831 23:24:21.740515   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetMachineName
	I0831 23:24:21.740734   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetIP
	I0831 23:24:21.742951   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.743234   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.743261   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.743403   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:21.745483   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.745828   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:21.745868   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:21.745981   60556 provision.go:143] copyHostCerts
	I0831 23:24:21.746048   60556 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 23:24:21.746068   60556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 23:24:21.746141   60556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 23:24:21.746252   60556 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 23:24:21.746262   60556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 23:24:21.746293   60556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 23:24:21.746368   60556 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 23:24:21.746384   60556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 23:24:21.746413   60556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 23:24:21.746478   60556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-828713 san=[127.0.0.1 192.168.50.109 kubernetes-upgrade-828713 localhost minikube]
	I0831 23:24:22.042802   60556 provision.go:177] copyRemoteCerts
	I0831 23:24:22.042856   60556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:24:22.042886   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:22.045580   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.045930   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.045962   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.046154   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:22.046369   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.046546   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:22.046688   60556 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa Username:docker}
	I0831 23:24:22.137427   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:24:22.171508   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0831 23:24:22.205396   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:24:22.232117   60556 provision.go:87] duration metric: took 491.598135ms to configureAuth
	I0831 23:24:22.232149   60556 buildroot.go:189] setting minikube options for container-runtime
	I0831 23:24:22.232345   60556 config.go:182] Loaded profile config "kubernetes-upgrade-828713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0831 23:24:22.232434   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:22.234945   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.235293   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.235343   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.235487   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:22.235698   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.235883   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.236047   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:22.236218   60556 main.go:141] libmachine: Using SSH client type: native
	I0831 23:24:22.236433   60556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0831 23:24:22.236455   60556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:24:22.505660   60556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:24:22.505689   60556 main.go:141] libmachine: Checking connection to Docker...
	I0831 23:24:22.505701   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetURL
	I0831 23:24:22.507213   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Using libvirt version 6000000
	I0831 23:24:22.510049   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.510441   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.510474   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.510633   60556 main.go:141] libmachine: Docker is up and running!
	I0831 23:24:22.510649   60556 main.go:141] libmachine: Reticulating splines...
	I0831 23:24:22.510658   60556 client.go:171] duration metric: took 24.749561099s to LocalClient.Create
	I0831 23:24:22.510706   60556 start.go:167] duration metric: took 24.749658468s to libmachine.API.Create "kubernetes-upgrade-828713"
	I0831 23:24:22.510720   60556 start.go:293] postStartSetup for "kubernetes-upgrade-828713" (driver="kvm2")
	I0831 23:24:22.510734   60556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:24:22.510758   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:22.511020   60556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:24:22.511045   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:22.513811   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.514204   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.514248   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.514369   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:22.514550   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.514705   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:22.514864   60556 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa Username:docker}
	I0831 23:24:22.605432   60556 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:24:22.609946   60556 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 23:24:22.609971   60556 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 23:24:22.610052   60556 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 23:24:22.610182   60556 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 23:24:22.610312   60556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:24:22.620080   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:24:22.651602   60556 start.go:296] duration metric: took 140.866833ms for postStartSetup
	I0831 23:24:22.651706   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetConfigRaw
	I0831 23:24:22.652374   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetIP
	I0831 23:24:22.655662   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.656011   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.656056   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.656342   60556 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/config.json ...
	I0831 23:24:22.656573   60556 start.go:128] duration metric: took 24.916697458s to createHost
	I0831 23:24:22.656601   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:22.659135   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.659531   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.659555   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.659720   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:22.659933   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.660122   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.660285   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:22.660499   60556 main.go:141] libmachine: Using SSH client type: native
	I0831 23:24:22.660810   60556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0831 23:24:22.660848   60556 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 23:24:22.772893   60556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725146662.730679700
	
	I0831 23:24:22.772918   60556 fix.go:216] guest clock: 1725146662.730679700
	I0831 23:24:22.772929   60556 fix.go:229] Guest: 2024-08-31 23:24:22.7306797 +0000 UTC Remote: 2024-08-31 23:24:22.656585077 +0000 UTC m=+59.161504755 (delta=74.094623ms)
	I0831 23:24:22.772986   60556 fix.go:200] guest clock delta is within tolerance: 74.094623ms
	I0831 23:24:22.772997   60556 start.go:83] releasing machines lock for "kubernetes-upgrade-828713", held for 25.033313952s
	I0831 23:24:22.773039   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:22.773273   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetIP
	I0831 23:24:22.776390   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.776959   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.777005   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.777323   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:22.777777   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:22.777934   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:24:22.778007   60556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:24:22.778054   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:22.778151   60556 ssh_runner.go:195] Run: cat /version.json
	I0831 23:24:22.778169   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:24:22.781555   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.781641   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.782087   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.782113   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.782249   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:22.782276   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:22.782457   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:22.782473   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:24:22.782620   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.782651   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:24:22.782774   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:22.782814   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:24:22.782888   60556 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa Username:docker}
	I0831 23:24:22.782977   60556 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa Username:docker}
	I0831 23:24:22.895673   60556 ssh_runner.go:195] Run: systemctl --version
	I0831 23:24:22.902884   60556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:24:23.075260   60556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 23:24:23.083544   60556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 23:24:23.083615   60556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:24:23.107697   60556 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 23:24:23.107722   60556 start.go:495] detecting cgroup driver to use...
	I0831 23:24:23.107790   60556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:24:23.131831   60556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:24:23.151085   60556 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:24:23.151148   60556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:24:23.168134   60556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:24:23.187047   60556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:24:23.334767   60556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:24:23.524483   60556 docker.go:233] disabling docker service ...
	I0831 23:24:23.524558   60556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:24:23.539968   60556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:24:23.553682   60556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:24:23.695065   60556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:24:23.844652   60556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:24:23.860339   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:24:23.881744   60556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0831 23:24:23.881810   60556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:24:23.895737   60556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:24:23.895827   60556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:24:23.909995   60556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:24:23.921762   60556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:24:23.935777   60556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:24:23.952254   60556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:24:23.966391   60556 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0831 23:24:23.966454   60556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0831 23:24:23.982376   60556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:24:23.995504   60556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:24:24.152332   60556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:24:24.293589   60556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:24:24.293642   60556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:24:24.299088   60556 start.go:563] Will wait 60s for crictl version
	I0831 23:24:24.299132   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:24.303475   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:24:24.349506   60556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 23:24:24.349581   60556 ssh_runner.go:195] Run: crio --version
	I0831 23:24:24.387141   60556 ssh_runner.go:195] Run: crio --version
	I0831 23:24:24.427584   60556 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0831 23:24:24.428892   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetIP
	I0831 23:24:24.432214   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:24.432702   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:24:13 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:24:24.432735   60556 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:24:24.432940   60556 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0831 23:24:24.438677   60556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:24:24.453029   60556 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-828713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-828713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:24:24.453162   60556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 23:24:24.453204   60556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:24:24.491473   60556 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0831 23:24:24.491561   60556 ssh_runner.go:195] Run: which lz4
	I0831 23:24:24.497955   60556 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 23:24:24.502697   60556 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 23:24:24.502741   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0831 23:24:26.211766   60556 crio.go:462] duration metric: took 1.713852838s to copy over tarball
	I0831 23:24:26.211830   60556 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 23:24:28.844067   60556 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.632207985s)
	I0831 23:24:28.844109   60556 crio.go:469] duration metric: took 2.632313742s to extract the tarball
	I0831 23:24:28.844119   60556 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 23:24:28.886779   60556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:24:28.946904   60556 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0831 23:24:28.946926   60556 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0831 23:24:28.946986   60556 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 23:24:28.946994   60556 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:28.947012   60556 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:28.947034   60556 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:28.947080   60556 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0831 23:24:28.947082   60556 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:28.947049   60556 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:28.947061   60556 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0831 23:24:28.948816   60556 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0831 23:24:28.948836   60556 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:28.948859   60556 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:28.948876   60556 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0831 23:24:28.948887   60556 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 23:24:28.948816   60556 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:28.948910   60556 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:28.948820   60556 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:29.132700   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:29.135622   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:29.144384   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:29.155034   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0831 23:24:29.175960   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:29.183929   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:29.191398   60556 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0831 23:24:29.191443   60556 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:29.191491   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.250809   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0831 23:24:29.278616   60556 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0831 23:24:29.278658   60556 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:29.278711   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.319819   60556 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0831 23:24:29.319867   60556 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:29.319872   60556 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0831 23:24:29.319904   60556 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0831 23:24:29.319912   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.319946   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.336230   60556 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0831 23:24:29.336284   60556 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:29.336331   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.338097   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:29.338159   60556 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0831 23:24:29.338197   60556 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:29.338240   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.362753   60556 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0831 23:24:29.362800   60556 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0831 23:24:29.362803   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:29.362847   60556 ssh_runner.go:195] Run: which crictl
	I0831 23:24:29.362898   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0831 23:24:29.362903   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:29.363009   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:29.363047   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:29.432410   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:29.495915   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:29.496014   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0831 23:24:29.495967   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:29.496016   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0831 23:24:29.502384   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:29.502577   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:29.541772   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0831 23:24:29.671030   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0831 23:24:29.671049   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0831 23:24:29.671097   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0831 23:24:29.671166   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0831 23:24:29.675775   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0831 23:24:29.675920   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0831 23:24:29.681225   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0831 23:24:29.783134   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0831 23:24:29.805654   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0831 23:24:29.805698   60556 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0831 23:24:29.805734   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0831 23:24:29.805701   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0831 23:24:29.805851   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0831 23:24:29.840748   60556 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0831 23:24:30.103054   60556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 23:24:30.248649   60556 cache_images.go:92] duration metric: took 1.301704224s to LoadCachedImages
	W0831 23:24:30.248769   60556 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18943-13149/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0831 23:24:30.248789   60556 kubeadm.go:934] updating node { 192.168.50.109 8443 v1.20.0 crio true true} ...
	I0831 23:24:30.248945   60556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-828713 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-828713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:24:30.249041   60556 ssh_runner.go:195] Run: crio config
	I0831 23:24:30.304516   60556 cni.go:84] Creating CNI manager for ""
	I0831 23:24:30.304546   60556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 23:24:30.304569   60556 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:24:30.304592   60556 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-828713 NodeName:kubernetes-upgrade-828713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0831 23:24:30.304751   60556 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-828713"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
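	(Editor's note: the multi-document kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml on the node. A minimal sketch of how one might programmatically inspect such a file is below; the dependency on gopkg.in/yaml.v3 and the helper struct are assumptions for illustration, not minikube code.)

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// doc captures only a few fields of interest from each YAML document
	// in a kubeadm config file like the one rendered in the log above.
	type doc struct {
		Kind              string `yaml:"kind"`
		KubernetesVersion string `yaml:"kubernetesVersion"`
		CgroupDriver      string `yaml:"cgroupDriver"`
		ClusterCIDR       string `yaml:"clusterCIDR"`
	}

	func main() {
		// Path taken from the log; adjust for your own environment.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var d doc
			if err := dec.Decode(&d); err == io.EOF {
				break // no more YAML documents
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s: version=%q cgroupDriver=%q clusterCIDR=%q\n",
				d.Kind, d.KubernetesVersion, d.CgroupDriver, d.ClusterCIDR)
		}
	}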
	
	I0831 23:24:30.304820   60556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0831 23:24:30.315701   60556 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:24:30.315829   60556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 23:24:30.326258   60556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0831 23:24:30.344182   60556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:24:30.362402   60556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0831 23:24:30.382146   60556 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0831 23:24:30.386150   60556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:24:30.400223   60556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:24:30.523044   60556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:24:30.541002   60556 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713 for IP: 192.168.50.109
	I0831 23:24:30.541073   60556 certs.go:194] generating shared ca certs ...
	I0831 23:24:30.541123   60556 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.541456   60556 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 23:24:30.541528   60556 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 23:24:30.541551   60556 certs.go:256] generating profile certs ...
	I0831 23:24:30.541647   60556 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.key
	I0831 23:24:30.541662   60556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.crt with IP's: []
	I0831 23:24:30.640996   60556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.crt ...
	I0831 23:24:30.641022   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.crt: {Name:mk242b83bf85a4637c1be72f81333fb6c215b398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.641220   60556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.key ...
	I0831 23:24:30.641239   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.key: {Name:mk4c6b71e04424a599e42b304464d082b32df794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.641356   60556 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.key.5d7913d8
	I0831 23:24:30.641388   60556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.crt.5d7913d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.109]
	I0831 23:24:30.825934   60556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.crt.5d7913d8 ...
	I0831 23:24:30.825972   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.crt.5d7913d8: {Name:mkcf9dc1b894966cbe63b07772586e5ed582de7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.826160   60556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.key.5d7913d8 ...
	I0831 23:24:30.826180   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.key.5d7913d8: {Name:mk97d9c565e634ddf458311eb44250d42928a822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.826279   60556 certs.go:381] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.crt.5d7913d8 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.crt
	I0831 23:24:30.826422   60556 certs.go:385] copying /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.key.5d7913d8 -> /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.key
	I0831 23:24:30.826506   60556 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.key
	I0831 23:24:30.826526   60556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.crt with IP's: []
	I0831 23:24:30.996324   60556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.crt ...
	I0831 23:24:30.996351   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.crt: {Name:mkfb5032cc828f781275174b8838582c96c685e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.996524   60556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.key ...
	I0831 23:24:30.996545   60556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.key: {Name:mkefa2055e289941a23216a867f947f3b730f46b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:24:30.996780   60556 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 23:24:30.996826   60556 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 23:24:30.996841   60556 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 23:24:30.996873   60556 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:24:30.996907   60556 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:24:30.996939   60556 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 23:24:30.996990   60556 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:24:30.997634   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:24:31.023772   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:24:31.047699   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:24:31.073858   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:24:31.099338   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0831 23:24:31.166908   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 23:24:31.234998   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:24:31.267241   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:24:31.293250   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:24:31.318212   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 23:24:31.349013   60556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 23:24:31.377004   60556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:24:31.405174   60556 ssh_runner.go:195] Run: openssl version
	I0831 23:24:31.415031   60556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 23:24:31.433045   60556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 23:24:31.442705   60556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 23:24:31.442780   60556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 23:24:31.455373   60556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 23:24:31.473223   60556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 23:24:31.494837   60556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 23:24:31.500905   60556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 23:24:31.501005   60556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 23:24:31.518434   60556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:24:31.530302   60556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:24:31.541557   60556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:24:31.546317   60556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:24:31.546406   60556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:24:31.552292   60556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:24:31.563958   60556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:24:31.569683   60556 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 23:24:31.569748   60556 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-828713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-828713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:24:31.569852   60556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 23:24:31.569902   60556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:24:31.612250   60556 cri.go:89] found id: ""
	I0831 23:24:31.612335   60556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 23:24:31.623375   60556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 23:24:31.634865   60556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 23:24:31.644789   60556 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 23:24:31.644806   60556 kubeadm.go:157] found existing configuration files:
	
	I0831 23:24:31.644854   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 23:24:31.654683   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 23:24:31.654735   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 23:24:31.666133   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 23:24:31.677253   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 23:24:31.677311   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 23:24:31.687625   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 23:24:31.698042   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 23:24:31.698100   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 23:24:31.708944   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 23:24:31.719073   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 23:24:31.719138   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 23:24:31.729499   60556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 23:24:31.851508   60556 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0831 23:24:31.851838   60556 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 23:24:32.008574   60556 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 23:24:32.008744   60556 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 23:24:32.008903   60556 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0831 23:24:32.276831   60556 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 23:24:32.446433   60556 out.go:235]   - Generating certificates and keys ...
	I0831 23:24:32.446587   60556 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 23:24:32.446702   60556 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 23:24:32.446842   60556 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 23:24:32.446953   60556 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 23:24:32.784217   60556 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 23:24:32.838251   60556 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 23:24:33.068927   60556 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 23:24:33.069121   60556 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-828713 localhost] and IPs [192.168.50.109 127.0.0.1 ::1]
	I0831 23:24:33.480962   60556 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 23:24:33.481279   60556 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-828713 localhost] and IPs [192.168.50.109 127.0.0.1 ::1]
	I0831 23:24:33.659412   60556 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 23:24:33.842928   60556 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 23:24:33.938347   60556 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 23:24:33.938666   60556 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 23:24:34.131212   60556 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 23:24:34.185971   60556 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 23:24:34.516968   60556 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 23:24:34.784456   60556 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 23:24:34.812899   60556 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 23:24:34.814307   60556 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 23:24:34.814508   60556 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 23:24:34.971107   60556 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 23:24:34.972830   60556 out.go:235]   - Booting up control plane ...
	I0831 23:24:34.972985   60556 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 23:24:34.981256   60556 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 23:24:34.982397   60556 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 23:24:34.983375   60556 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 23:24:34.988015   60556 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0831 23:25:14.954066   60556 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0831 23:25:14.954599   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:25:14.954895   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:25:19.953852   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:25:19.954107   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:25:29.952916   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:25:29.953149   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:25:49.953307   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:25:49.953595   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:26:29.952352   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:26:29.952617   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:26:29.952642   60556 kubeadm.go:310] 
	I0831 23:26:29.952696   60556 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0831 23:26:29.952739   60556 kubeadm.go:310] 		timed out waiting for the condition
	I0831 23:26:29.952749   60556 kubeadm.go:310] 
	I0831 23:26:29.952795   60556 kubeadm.go:310] 	This error is likely caused by:
	I0831 23:26:29.952842   60556 kubeadm.go:310] 		- The kubelet is not running
	I0831 23:26:29.952992   60556 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0831 23:26:29.953001   60556 kubeadm.go:310] 
	I0831 23:26:29.953145   60556 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0831 23:26:29.953248   60556 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0831 23:26:29.953311   60556 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0831 23:26:29.953321   60556 kubeadm.go:310] 
	I0831 23:26:29.953462   60556 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0831 23:26:29.953535   60556 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0831 23:26:29.953542   60556 kubeadm.go:310] 
	I0831 23:26:29.953678   60556 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0831 23:26:29.953810   60556 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0831 23:26:29.953931   60556 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0831 23:26:29.954025   60556 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0831 23:26:29.954037   60556 kubeadm.go:310] 
	I0831 23:26:29.954373   60556 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 23:26:29.954484   60556 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0831 23:26:29.954654   60556 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0831 23:26:29.954737   60556 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-828713 localhost] and IPs [192.168.50.109 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-828713 localhost] and IPs [192.168.50.109 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-828713 localhost] and IPs [192.168.50.109 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-828713 localhost] and IPs [192.168.50.109 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
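	(Editor's note: the repeated [kubelet-check] lines above poll http://localhost:10248/healthz until kubeadm gives up. Below is a minimal, illustrative Go sketch of that kind of poll loop; it is not kubeadm's or minikube's actual implementation, and the timeout values are only examples.)

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls the kubelet healthz endpoint mentioned in the
	// kubelet-check lines above, giving up after the supplied deadline.
	func waitForKubelet(deadline time.Duration) error {
		client := &http.Client{Timeout: 2 * time.Second}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is serving healthz
				}
			}
			time.Sleep(5 * time.Second) // retry interval, as an example
		}
		return fmt.Errorf("kubelet did not become healthy within %s", deadline)
	}

	func main() {
		if err := waitForKubelet(40 * time.Second); err != nil {
			fmt.Println(err) // e.g. "kubelet did not become healthy within 40s"
		}
	}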
	
	I0831 23:26:29.954781   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0831 23:26:30.734828   60556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:26:30.748515   60556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 23:26:30.757883   60556 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 23:26:30.757900   60556 kubeadm.go:157] found existing configuration files:
	
	I0831 23:26:30.757954   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 23:26:30.766655   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 23:26:30.766704   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 23:26:30.775642   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 23:26:30.784575   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 23:26:30.784628   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 23:26:30.793943   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 23:26:30.806291   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 23:26:30.806342   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 23:26:30.818683   60556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 23:26:30.827175   60556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 23:26:30.827231   60556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 23:26:30.839369   60556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 23:26:30.920792   60556 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0831 23:26:30.920869   60556 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 23:26:31.078060   60556 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 23:26:31.078196   60556 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 23:26:31.078352   60556 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0831 23:26:31.256113   60556 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 23:26:31.258133   60556 out.go:235]   - Generating certificates and keys ...
	I0831 23:26:31.258241   60556 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 23:26:31.258326   60556 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 23:26:31.258435   60556 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0831 23:26:31.258508   60556 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0831 23:26:31.258611   60556 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0831 23:26:31.258681   60556 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0831 23:26:31.258763   60556 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0831 23:26:31.259039   60556 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0831 23:26:31.259437   60556 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0831 23:26:31.259889   60556 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0831 23:26:31.259958   60556 kubeadm.go:310] [certs] Using the existing "sa" key
	I0831 23:26:31.260040   60556 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 23:26:31.369552   60556 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 23:26:31.560622   60556 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 23:26:31.884724   60556 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 23:26:32.086336   60556 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 23:26:32.101294   60556 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 23:26:32.102369   60556 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 23:26:32.102447   60556 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 23:26:32.229823   60556 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 23:26:32.231940   60556 out.go:235]   - Booting up control plane ...
	I0831 23:26:32.232071   60556 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 23:26:32.236702   60556 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 23:26:32.237536   60556 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 23:26:32.238158   60556 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 23:26:32.242996   60556 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0831 23:27:12.244470   60556 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0831 23:27:12.246688   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:27:12.246970   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:27:17.247440   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:27:17.247717   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:27:27.248211   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:27:27.248471   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:27:47.249745   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:27:47.249960   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:28:27.249707   60556 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0831 23:28:27.249921   60556 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0831 23:28:27.249934   60556 kubeadm.go:310] 
	I0831 23:28:27.249987   60556 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0831 23:28:27.250045   60556 kubeadm.go:310] 		timed out waiting for the condition
	I0831 23:28:27.250056   60556 kubeadm.go:310] 
	I0831 23:28:27.250126   60556 kubeadm.go:310] 	This error is likely caused by:
	I0831 23:28:27.250214   60556 kubeadm.go:310] 		- The kubelet is not running
	I0831 23:28:27.250326   60556 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0831 23:28:27.250334   60556 kubeadm.go:310] 
	I0831 23:28:27.250457   60556 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0831 23:28:27.250507   60556 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0831 23:28:27.250564   60556 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0831 23:28:27.250575   60556 kubeadm.go:310] 
	I0831 23:28:27.250737   60556 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0831 23:28:27.250833   60556 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0831 23:28:27.250845   60556 kubeadm.go:310] 
	I0831 23:28:27.251024   60556 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0831 23:28:27.251110   60556 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0831 23:28:27.251222   60556 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0831 23:28:27.251343   60556 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0831 23:28:27.251351   60556 kubeadm.go:310] 
	I0831 23:28:27.251943   60556 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 23:28:27.252029   60556 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0831 23:28:27.252084   60556 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0831 23:28:27.252149   60556 kubeadm.go:394] duration metric: took 3m55.682406478s to StartCluster
	I0831 23:28:27.252185   60556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:28:27.252229   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:28:27.295088   60556 cri.go:89] found id: ""
	I0831 23:28:27.295114   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.295122   60556 logs.go:278] No container was found matching "kube-apiserver"
	I0831 23:28:27.295130   60556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:28:27.295187   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:28:27.331717   60556 cri.go:89] found id: ""
	I0831 23:28:27.331743   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.331754   60556 logs.go:278] No container was found matching "etcd"
	I0831 23:28:27.331760   60556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:28:27.331816   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:28:27.368713   60556 cri.go:89] found id: ""
	I0831 23:28:27.368744   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.368754   60556 logs.go:278] No container was found matching "coredns"
	I0831 23:28:27.368765   60556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:28:27.368832   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:28:27.405798   60556 cri.go:89] found id: ""
	I0831 23:28:27.405822   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.405832   60556 logs.go:278] No container was found matching "kube-scheduler"
	I0831 23:28:27.405840   60556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:28:27.405906   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:28:27.442161   60556 cri.go:89] found id: ""
	I0831 23:28:27.442184   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.442192   60556 logs.go:278] No container was found matching "kube-proxy"
	I0831 23:28:27.442197   60556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:28:27.442241   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:28:27.480159   60556 cri.go:89] found id: ""
	I0831 23:28:27.480189   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.480199   60556 logs.go:278] No container was found matching "kube-controller-manager"
	I0831 23:28:27.480206   60556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:28:27.480287   60556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:28:27.517572   60556 cri.go:89] found id: ""
	I0831 23:28:27.517596   60556 logs.go:276] 0 containers: []
	W0831 23:28:27.517606   60556 logs.go:278] No container was found matching "kindnet"
	I0831 23:28:27.517617   60556 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:28:27.517630   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:28:27.640220   60556 logs.go:123] Gathering logs for container status ...
	I0831 23:28:27.640256   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:28:27.679755   60556 logs.go:123] Gathering logs for kubelet ...
	I0831 23:28:27.679785   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:28:27.739945   60556 logs.go:123] Gathering logs for dmesg ...
	I0831 23:28:27.739988   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:28:27.754730   60556 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:28:27.754761   60556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0831 23:28:27.884092   60556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0831 23:28:27.884122   60556 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0831 23:28:27.884182   60556 out.go:270] * 
	W0831 23:28:27.884245   60556 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0831 23:28:27.884266   60556 out.go:270] * 
	W0831 23:28:27.885063   60556 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 23:28:28.014658   60556 out.go:201] 
	W0831 23:28:28.028317   60556 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0831 23:28:28.028386   60556 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0831 23:28:28.028415   60556 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0831 23:28:28.122014   60556 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
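For reference, the kubeadm failure above can be investigated with the commands the log itself recommends; this is only a sketch against the profile from this run (kubernetes-upgrade-828713), and the cgroup-driver retry on the last line is the workaround minikube suggests further up, not a confirmed fix:

	# check whether the kubelet came up on the node and why it exited
	out/minikube-linux-amd64 -p kubernetes-upgrade-828713 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-828713 ssh -- sudo journalctl -xeu kubelet
	# list control-plane containers via CRI-O, as the kubeadm output advises
	out/minikube-linux-amd64 -p kubernetes-upgrade-828713 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the start with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd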
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-828713
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-828713: (6.51371364s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-828713 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-828713 status --format={{.Host}}: exit status 7 (67.046406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.214783951s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-828713 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.026006ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-828713] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-828713
	    minikube start -p kubernetes-upgrade-828713 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8287132 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-828713 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
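For reference, the exit status 106 above accompanies minikube's K8S_DOWNGRADE_UNSUPPORTED reason, so the refusal is the behaviour this step expects rather than a crash; a minimal sketch of confirming it outside the test harness, reusing the exact invocation and profile from this run:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	echo $?   # 106 expected while the profile is still at v1.31.0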
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-828713 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (14.527724145s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-31 23:29:23.63718212 +0000 UTC m=+5014.463118382
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-828713 -n kubernetes-upgrade-828713
helpers_test.go:245: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-828713 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-828713 logs -n 25: (1.36127335s)
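For completeness, the 25-line tails below are only the post-mortem snippet; the full log bundle that the warning box above asks for can be generated roughly as follows (a sketch using this run's profile name):

	out/minikube-linux-amd64 -p kubernetes-upgrade-828713 logs --file=logs.txt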
helpers_test.go:253: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | status kubelet --all --full                          |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo journalctl                       | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC |                     |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo docker                           | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo                                  | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo cat                              | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo containerd                       | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo systemctl                        | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo find                             | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-009399 sudo crio                             | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-009399                                       | auto-009399    | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC | 31 Aug 24 23:29 UTC |
	| start   | -p kindnet-009399                                    | kindnet-009399 | jenkins | v1.33.1 | 31 Aug 24 23:29 UTC |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:29:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:29:21.820860   66851 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:29:21.821307   66851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:29:21.821326   66851 out.go:358] Setting ErrFile to fd 2...
	I0831 23:29:21.821334   66851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:29:21.821798   66851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:29:21.822704   66851 out.go:352] Setting JSON to false
	I0831 23:29:21.823804   66851 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7909,"bootTime":1725139053,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:29:21.823873   66851 start.go:139] virtualization: kvm guest
	I0831 23:29:21.825888   66851 out.go:177] * [kindnet-009399] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:29:21.827796   66851 notify.go:220] Checking for updates...
	I0831 23:29:21.827814   66851 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:29:21.829300   66851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:29:21.830913   66851 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:29:21.832361   66851 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:29:21.833813   66851 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:29:21.835283   66851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:29:21.837065   66851 config.go:182] Loaded profile config "cert-expiration-678368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:29:21.837164   66851 config.go:182] Loaded profile config "kubernetes-upgrade-828713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:29:21.837285   66851 config.go:182] Loaded profile config "pause-945775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:29:21.837352   66851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:29:21.874418   66851 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 23:29:21.875870   66851 start.go:297] selected driver: kvm2
	I0831 23:29:21.875884   66851 start.go:901] validating driver "kvm2" against <nil>
	I0831 23:29:21.875896   66851 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:29:21.876583   66851 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:29:21.876655   66851 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 23:29:21.892131   66851 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 23:29:21.892174   66851 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 23:29:21.892365   66851 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:29:21.892424   66851 cni.go:84] Creating CNI manager for "kindnet"
	I0831 23:29:21.892432   66851 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 23:29:21.892475   66851 start.go:340] cluster config:
	{Name:kindnet-009399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-009399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:29:21.892562   66851 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:29:21.895208   66851 out.go:177] * Starting "kindnet-009399" primary control-plane node in "kindnet-009399" cluster
	I0831 23:29:21.896571   66851 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:29:21.896609   66851 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 23:29:21.896618   66851 cache.go:56] Caching tarball of preloaded images
	I0831 23:29:21.896695   66851 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 23:29:21.896708   66851 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:29:21.896828   66851 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kindnet-009399/config.json ...
	I0831 23:29:21.896848   66851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kindnet-009399/config.json: {Name:mk9737aca10b22e7481338c010b1a0d8b57beff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:29:21.896987   66851 start.go:360] acquireMachinesLock for kindnet-009399: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 23:29:21.897026   66851 start.go:364] duration metric: took 24.68µs to acquireMachinesLock for "kindnet-009399"
	I0831 23:29:21.897045   66851 start.go:93] Provisioning new machine with config: &{Name:kindnet-009399 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:kindnet-009399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:29:21.897123   66851 start.go:125] createHost starting for "" (driver="kvm2")
	I0831 23:29:20.986887   65533 api_server.go:279] https://192.168.50.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0831 23:29:20.986919   65533 api_server.go:103] status: https://192.168.50.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0831 23:29:20.986934   65533 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8443/healthz ...
	I0831 23:29:21.067379   65533 api_server.go:279] https://192.168.50.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:29:21.067413   65533 api_server.go:103] status: https://192.168.50.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:29:21.168560   65533 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8443/healthz ...
	I0831 23:29:21.173689   65533 api_server.go:279] https://192.168.50.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:29:21.173712   65533 api_server.go:103] status: https://192.168.50.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:29:21.668231   65533 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8443/healthz ...
	I0831 23:29:21.674747   65533 api_server.go:279] https://192.168.50.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:29:21.674775   65533 api_server.go:103] status: https://192.168.50.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:29:22.167986   65533 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8443/healthz ...
	I0831 23:29:22.172433   65533 api_server.go:279] https://192.168.50.109:8443/healthz returned 200:
	ok
	I0831 23:29:22.179407   65533 api_server.go:141] control plane version: v1.31.0
	I0831 23:29:22.179436   65533 api_server.go:131] duration metric: took 4.511590031s to wait for apiserver health ...
	I0831 23:29:22.179447   65533 cni.go:84] Creating CNI manager for ""
	I0831 23:29:22.179456   65533 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 23:29:22.181897   65533 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 23:29:22.183385   65533 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 23:29:22.194257   65533 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0831 23:29:22.216224   65533 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 23:29:22.216291   65533 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 23:29:22.216305   65533 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 23:29:22.227789   65533 system_pods.go:59] 4 kube-system pods found
	I0831 23:29:22.227822   65533 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-828713" [d6e415e0-dbf2-4e19-99b7-f2ce86cac5cd] Pending
	I0831 23:29:22.227831   65533 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-828713" [f5843b07-3a99-4477-8910-d5064a3d6560] Pending
	I0831 23:29:22.227837   65533 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-828713" [2170bcdf-11f5-492d-b939-0e80138ed4bf] Pending
	I0831 23:29:22.227847   65533 system_pods.go:61] "storage-provisioner" [e9192e00-b106-44ac-92fd-ec4febe81c7d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0831 23:29:22.227854   65533 system_pods.go:74] duration metric: took 11.610985ms to wait for pod list to return data ...
	I0831 23:29:22.227863   65533 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:29:22.233538   65533 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 23:29:22.233561   65533 node_conditions.go:123] node cpu capacity is 2
	I0831 23:29:22.233570   65533 node_conditions.go:105] duration metric: took 5.70206ms to run NodePressure ...
	I0831 23:29:22.233587   65533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 23:29:22.545566   65533 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 23:29:22.556718   65533 ops.go:34] apiserver oom_adj: -16
	I0831 23:29:22.556812   65533 kubeadm.go:597] duration metric: took 7.619705088s to restartPrimaryControlPlane
	I0831 23:29:22.556841   65533 kubeadm.go:394] duration metric: took 7.720686499s to StartCluster
	I0831 23:29:22.556866   65533 settings.go:142] acquiring lock: {Name:mkec6b4f5d3301688503002977bc4d63aab7adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:29:22.556950   65533 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:29:22.558619   65533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/kubeconfig: {Name:mkc6d6b60cc62b336d228fe4b49e098aa4d94f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:29:22.558883   65533 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:29:22.558956   65533 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 23:29:22.559018   65533 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-828713"
	I0831 23:29:22.559049   65533 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-828713"
	W0831 23:29:22.559062   65533 addons.go:243] addon storage-provisioner should already be in state true
	I0831 23:29:22.559089   65533 config.go:182] Loaded profile config "kubernetes-upgrade-828713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:29:22.559093   65533 host.go:66] Checking if "kubernetes-upgrade-828713" exists ...
	I0831 23:29:22.559044   65533 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-828713"
	I0831 23:29:22.559176   65533 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-828713"
	I0831 23:29:22.559434   65533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:29:22.559459   65533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:29:22.559610   65533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:29:22.559635   65533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:29:22.561375   65533 out.go:177] * Verifying Kubernetes components...
	I0831 23:29:22.563195   65533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:29:22.576115   65533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I0831 23:29:22.576478   65533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0831 23:29:22.576599   65533 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:29:22.576904   65533 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:29:22.577084   65533 main.go:141] libmachine: Using API Version  1
	I0831 23:29:22.577105   65533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:29:22.577314   65533 main.go:141] libmachine: Using API Version  1
	I0831 23:29:22.577337   65533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:29:22.577425   65533 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:29:22.577628   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetState
	I0831 23:29:22.577668   65533 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:29:22.578204   65533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:29:22.578233   65533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:29:22.580388   65533 kapi.go:59] client config for kubernetes-upgrade-828713: &rest.Config{Host:"https://192.168.50.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.key", CAFile:"/home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f192a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 23:29:22.580691   65533 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-828713"
	W0831 23:29:22.580705   65533 addons.go:243] addon default-storageclass should already be in state true
	I0831 23:29:22.580734   65533 host.go:66] Checking if "kubernetes-upgrade-828713" exists ...
	I0831 23:29:22.581116   65533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:29:22.581145   65533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:29:22.595286   65533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0831 23:29:22.595770   65533 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:29:22.596214   65533 main.go:141] libmachine: Using API Version  1
	I0831 23:29:22.596234   65533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:29:22.596555   65533 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:29:22.597057   65533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:29:22.597098   65533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:29:22.597928   65533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I0831 23:29:22.598270   65533 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:29:22.598720   65533 main.go:141] libmachine: Using API Version  1
	I0831 23:29:22.598737   65533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:29:22.599258   65533 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:29:22.599471   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetState
	I0831 23:29:22.601085   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:29:22.603275   65533 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 23:29:22.604651   65533 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:29:22.604672   65533 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 23:29:22.604690   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:29:22.608036   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:29:22.608433   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:28:46 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:29:22.608458   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:29:22.608582   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:29:22.608763   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:29:22.608909   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:29:22.609135   65533 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa Username:docker}
	I0831 23:29:22.614485   65533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0831 23:29:22.614877   65533 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:29:22.615421   65533 main.go:141] libmachine: Using API Version  1
	I0831 23:29:22.615445   65533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:29:22.615779   65533 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:29:22.615965   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetState
	I0831 23:29:22.617672   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .DriverName
	I0831 23:29:22.617896   65533 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 23:29:22.617910   65533 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 23:29:22.617929   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHHostname
	I0831 23:29:22.620789   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:29:22.622055   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:b2:56", ip: ""} in network mk-kubernetes-upgrade-828713: {Iface:virbr2 ExpiryTime:2024-09-01 00:28:46 +0000 UTC Type:0 Mac:52:54:00:f1:b2:56 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:kubernetes-upgrade-828713 Clientid:01:52:54:00:f1:b2:56}
	I0831 23:29:22.622082   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | domain kubernetes-upgrade-828713 has defined IP address 192.168.50.109 and MAC address 52:54:00:f1:b2:56 in network mk-kubernetes-upgrade-828713
	I0831 23:29:22.622288   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHPort
	I0831 23:29:22.622481   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHKeyPath
	I0831 23:29:22.622638   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .GetSSHUsername
	I0831 23:29:22.622850   65533 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/kubernetes-upgrade-828713/id_rsa Username:docker}
	I0831 23:29:22.713969   65533 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:29:22.731539   65533 api_server.go:52] waiting for apiserver process to appear ...
	I0831 23:29:22.731627   65533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:29:22.747613   65533 api_server.go:72] duration metric: took 188.690359ms to wait for apiserver process to appear ...
	I0831 23:29:22.747639   65533 api_server.go:88] waiting for apiserver healthz status ...
	I0831 23:29:22.747660   65533 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8443/healthz ...
	I0831 23:29:22.751614   65533 api_server.go:279] https://192.168.50.109:8443/healthz returned 200:
	ok
	I0831 23:29:22.752484   65533 api_server.go:141] control plane version: v1.31.0
	I0831 23:29:22.752508   65533 api_server.go:131] duration metric: took 4.861342ms to wait for apiserver health ...
	I0831 23:29:22.752517   65533 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 23:29:22.755952   65533 system_pods.go:59] 4 kube-system pods found
	I0831 23:29:22.755972   65533 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-828713" [d6e415e0-dbf2-4e19-99b7-f2ce86cac5cd] Pending
	I0831 23:29:22.755978   65533 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-828713" [f5843b07-3a99-4477-8910-d5064a3d6560] Pending
	I0831 23:29:22.755982   65533 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-828713" [2170bcdf-11f5-492d-b939-0e80138ed4bf] Pending
	I0831 23:29:22.755989   65533 system_pods.go:61] "storage-provisioner" [e9192e00-b106-44ac-92fd-ec4febe81c7d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0831 23:29:22.755997   65533 system_pods.go:74] duration metric: took 3.474288ms to wait for pod list to return data ...
	I0831 23:29:22.756009   65533 kubeadm.go:582] duration metric: took 197.09383ms to wait for: map[apiserver:true system_pods:true]
	I0831 23:29:22.756026   65533 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:29:22.758190   65533 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 23:29:22.758208   65533 node_conditions.go:123] node cpu capacity is 2
	I0831 23:29:22.758217   65533 node_conditions.go:105] duration metric: took 2.187578ms to run NodePressure ...
	I0831 23:29:22.758226   65533 start.go:241] waiting for startup goroutines ...
	I0831 23:29:22.799633   65533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:29:22.816164   65533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 23:29:23.541549   65533 main.go:141] libmachine: Making call to close driver server
	I0831 23:29:23.541583   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .Close
	I0831 23:29:23.541727   65533 main.go:141] libmachine: Making call to close driver server
	I0831 23:29:23.541746   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .Close
	I0831 23:29:23.542030   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Closing plugin on server side
	I0831 23:29:23.542066   65533 main.go:141] libmachine: Successfully made call to close driver server
	I0831 23:29:23.542074   65533 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 23:29:23.542081   65533 main.go:141] libmachine: Making call to close driver server
	I0831 23:29:23.542089   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .Close
	I0831 23:29:23.542170   65533 main.go:141] libmachine: Successfully made call to close driver server
	I0831 23:29:23.542174   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Closing plugin on server side
	I0831 23:29:23.542186   65533 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 23:29:23.542198   65533 main.go:141] libmachine: Making call to close driver server
	I0831 23:29:23.542207   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .Close
	I0831 23:29:23.542344   65533 main.go:141] libmachine: Successfully made call to close driver server
	I0831 23:29:23.542360   65533 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 23:29:23.542477   65533 main.go:141] libmachine: Successfully made call to close driver server
	I0831 23:29:23.542498   65533 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 23:29:23.542518   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Closing plugin on server side
	I0831 23:29:23.550117   65533 main.go:141] libmachine: Making call to close driver server
	I0831 23:29:23.550137   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) Calling .Close
	I0831 23:29:23.550378   65533 main.go:141] libmachine: (kubernetes-upgrade-828713) DBG | Closing plugin on server side
	I0831 23:29:23.550438   65533 main.go:141] libmachine: Successfully made call to close driver server
	I0831 23:29:23.550461   65533 main.go:141] libmachine: Making call to close connection to plugin binary
	I0831 23:29:23.552975   65533 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0831 23:29:23.554253   65533 addons.go:510] duration metric: took 995.303767ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0831 23:29:23.554295   65533 start.go:246] waiting for cluster config update ...
	I0831 23:29:23.554311   65533 start.go:255] writing updated cluster config ...
	I0831 23:29:23.554629   65533 ssh_runner.go:195] Run: rm -f paused
	I0831 23:29:23.613418   65533 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 23:29:23.615320   65533 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-828713" cluster and "default" namespace by default
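Note on the health probes above: the 403 and 500 responses are minikube polling the kube-apiserver's /healthz endpoint until every post-start hook reports ok; the [+]/[-] listing is the apiserver's verbose health output. For reference only, a similar probe can be reproduced by hand against this profile; the commands below are a sketch that reuses the endpoint, context name, and certificate paths appearing in the log above, and they assume the run's kubeconfig and certificates are still in place.

    # Verbose health check through kubectl (query parameters pass through with --raw):
    kubectl --context kubernetes-upgrade-828713 get --raw '/healthz?verbose'

    # Or probe the endpoint directly with the profile's client certificates:
    curl --cacert /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.crt \
         --key /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/kubernetes-upgrade-828713/client.key \
         'https://192.168.50.109:8443/healthz?verbose'

Either form returns "ok" once the apiserver is healthy, or the per-check breakdown seen in the log while post-start hooks are still failing.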
	
	
	==> CRI-O <==
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.326342026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725146964326305481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57a39bd1-4c7d-4f2e-8dd9-ee74177cd3e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.326944468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f077626-eb80-47c8-90ab-018e81edae04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.327124337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f077626-eb80-47c8-90ab-018e81edae04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.327510745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c500d97b0d8ed00b2ed43f3a298c5e6611f7b6eb7fd6e09675e7e94795a6e86e,PodSandboxId:bf1da61584bad674a2b7b5add83ec07c29b4214afddbbe8551d2cb522e4d7244,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725146957122803260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8b9ca065c19ac6f56cc3c089165efa2b6d8536aa486b09f24f9c0babc967def,PodSandboxId:9adb718c41f307036ff3603bcab1040017569f6dad2a9f37b0d25d4c2a592328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725146957131979165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269e043f658930162c6a6695f8d8714c0aa3229caff859c75f043abaf300c24,PodSandboxId:4231cd275380a3f59dedc9c303d689dee073f0352a91b52754a63cf393387602,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725146957103077560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552b905e032eef48d9ce4fc61c3a99bfe504301a9fdc3c021ce965fc679c06b4,PodSandboxId:a73e99c8ebbecc927ed993de21653d23cd715e614e7dcf527c97a60046ebc8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725146957100480938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cca34c9a99a44fee7c37476020c5e5ebabab85ea42f6be6b55a1daa0a23ad8f,PodSandboxId:0152d96511d5f00307ba9e99d1ea29b16eb2fd88964d1a92a682c9bf42e03e92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725146951420983470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33224bae77e04c8fdbf715fb6914260b6d8cc5445c7c34bb76d5b9b2ff0a55df,PodSandboxId:c00dd4e4362bb758827e943477840af885d4db02be9a532d81a30dde29b44837,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725146951366164049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fa147bb25e20e6e45b5023afb68ea1a7294a329cea308e5ccac8afd9e093ed,PodSandboxId:292695f6da67cf20aba8fd3b1c7e3a06568ee5e2d6a86040ced22b69281db795,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725146951364407744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c611b6316aa0347befd604f2b958d607824c1c8e6b6711fed197e93d7be7b289,PodSandboxId:d2f393550d9976fd2dcf4d23506dc7d61e5f66a72675c57f3361af4c85ed3a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725146951274541132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f077626-eb80-47c8-90ab-018e81edae04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.364002793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20b4fda3-ea3a-40e9-8a5e-506e33b1c0f1 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.364150499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20b4fda3-ea3a-40e9-8a5e-506e33b1c0f1 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.366007506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65bddbae-a9d5-4705-a38b-d6d21cd953cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.366500113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725146964366475807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65bddbae-a9d5-4705-a38b-d6d21cd953cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.367339001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23f2dd07-1127-4280-ad49-1025c13f8df1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.367458890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23f2dd07-1127-4280-ad49-1025c13f8df1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.367661080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c500d97b0d8ed00b2ed43f3a298c5e6611f7b6eb7fd6e09675e7e94795a6e86e,PodSandboxId:bf1da61584bad674a2b7b5add83ec07c29b4214afddbbe8551d2cb522e4d7244,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725146957122803260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8b9ca065c19ac6f56cc3c089165efa2b6d8536aa486b09f24f9c0babc967def,PodSandboxId:9adb718c41f307036ff3603bcab1040017569f6dad2a9f37b0d25d4c2a592328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725146957131979165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269e043f658930162c6a6695f8d8714c0aa3229caff859c75f043abaf300c24,PodSandboxId:4231cd275380a3f59dedc9c303d689dee073f0352a91b52754a63cf393387602,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725146957103077560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552b905e032eef48d9ce4fc61c3a99bfe504301a9fdc3c021ce965fc679c06b4,PodSandboxId:a73e99c8ebbecc927ed993de21653d23cd715e614e7dcf527c97a60046ebc8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725146957100480938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cca34c9a99a44fee7c37476020c5e5ebabab85ea42f6be6b55a1daa0a23ad8f,PodSandboxId:0152d96511d5f00307ba9e99d1ea29b16eb2fd88964d1a92a682c9bf42e03e92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725146951420983470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33224bae77e04c8fdbf715fb6914260b6d8cc5445c7c34bb76d5b9b2ff0a55df,PodSandboxId:c00dd4e4362bb758827e943477840af885d4db02be9a532d81a30dde29b44837,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725146951366164049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fa147bb25e20e6e45b5023afb68ea1a7294a329cea308e5ccac8afd9e093ed,PodSandboxId:292695f6da67cf20aba8fd3b1c7e3a06568ee5e2d6a86040ced22b69281db795,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725146951364407744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c611b6316aa0347befd604f2b958d607824c1c8e6b6711fed197e93d7be7b289,PodSandboxId:d2f393550d9976fd2dcf4d23506dc7d61e5f66a72675c57f3361af4c85ed3a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725146951274541132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23f2dd07-1127-4280-ad49-1025c13f8df1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.452471243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c3a6198-7926-434c-98fa-575fe7b8d2d6 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.452567898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c3a6198-7926-434c-98fa-575fe7b8d2d6 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.456694804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6532f811-2365-45ca-b06d-e753b349ef88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.458157481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725146964458077867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6532f811-2365-45ca-b06d-e753b349ef88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.459400094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc962c21-a2be-45e8-82da-a0879e1ad1d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.459457312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc962c21-a2be-45e8-82da-a0879e1ad1d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.459676719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c500d97b0d8ed00b2ed43f3a298c5e6611f7b6eb7fd6e09675e7e94795a6e86e,PodSandboxId:bf1da61584bad674a2b7b5add83ec07c29b4214afddbbe8551d2cb522e4d7244,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725146957122803260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8b9ca065c19ac6f56cc3c089165efa2b6d8536aa486b09f24f9c0babc967def,PodSandboxId:9adb718c41f307036ff3603bcab1040017569f6dad2a9f37b0d25d4c2a592328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725146957131979165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269e043f658930162c6a6695f8d8714c0aa3229caff859c75f043abaf300c24,PodSandboxId:4231cd275380a3f59dedc9c303d689dee073f0352a91b52754a63cf393387602,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725146957103077560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552b905e032eef48d9ce4fc61c3a99bfe504301a9fdc3c021ce965fc679c06b4,PodSandboxId:a73e99c8ebbecc927ed993de21653d23cd715e614e7dcf527c97a60046ebc8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725146957100480938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cca34c9a99a44fee7c37476020c5e5ebabab85ea42f6be6b55a1daa0a23ad8f,PodSandboxId:0152d96511d5f00307ba9e99d1ea29b16eb2fd88964d1a92a682c9bf42e03e92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725146951420983470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33224bae77e04c8fdbf715fb6914260b6d8cc5445c7c34bb76d5b9b2ff0a55df,PodSandboxId:c00dd4e4362bb758827e943477840af885d4db02be9a532d81a30dde29b44837,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725146951366164049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fa147bb25e20e6e45b5023afb68ea1a7294a329cea308e5ccac8afd9e093ed,PodSandboxId:292695f6da67cf20aba8fd3b1c7e3a06568ee5e2d6a86040ced22b69281db795,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725146951364407744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c611b6316aa0347befd604f2b958d607824c1c8e6b6711fed197e93d7be7b289,PodSandboxId:d2f393550d9976fd2dcf4d23506dc7d61e5f66a72675c57f3361af4c85ed3a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725146951274541132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc962c21-a2be-45e8-82da-a0879e1ad1d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.513886316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78c415f9-3498-4929-a404-8d11f1d90d5d name=/runtime.v1.RuntimeService/Version
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.513989127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78c415f9-3498-4929-a404-8d11f1d90d5d name=/runtime.v1.RuntimeService/Version
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.518605792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=947f2f46-34b4-4f24-8575-1caa556f6a1f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.519003766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725146964518978396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=947f2f46-34b4-4f24-8575-1caa556f6a1f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.519672636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f9deded-65db-440e-a178-96912fa99ea8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.519753494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f9deded-65db-440e-a178-96912fa99ea8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:29:24 kubernetes-upgrade-828713 crio[1865]: time="2024-08-31 23:29:24.519950761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c500d97b0d8ed00b2ed43f3a298c5e6611f7b6eb7fd6e09675e7e94795a6e86e,PodSandboxId:bf1da61584bad674a2b7b5add83ec07c29b4214afddbbe8551d2cb522e4d7244,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725146957122803260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8b9ca065c19ac6f56cc3c089165efa2b6d8536aa486b09f24f9c0babc967def,PodSandboxId:9adb718c41f307036ff3603bcab1040017569f6dad2a9f37b0d25d4c2a592328,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725146957131979165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269e043f658930162c6a6695f8d8714c0aa3229caff859c75f043abaf300c24,PodSandboxId:4231cd275380a3f59dedc9c303d689dee073f0352a91b52754a63cf393387602,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725146957103077560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552b905e032eef48d9ce4fc61c3a99bfe504301a9fdc3c021ce965fc679c06b4,PodSandboxId:a73e99c8ebbecc927ed993de21653d23cd715e614e7dcf527c97a60046ebc8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725146957100480938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cca34c9a99a44fee7c37476020c5e5ebabab85ea42f6be6b55a1daa0a23ad8f,PodSandboxId:0152d96511d5f00307ba9e99d1ea29b16eb2fd88964d1a92a682c9bf42e03e92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725146951420983470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b57ff91a4daba50d818595768b1f22,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33224bae77e04c8fdbf715fb6914260b6d8cc5445c7c34bb76d5b9b2ff0a55df,PodSandboxId:c00dd4e4362bb758827e943477840af885d4db02be9a532d81a30dde29b44837,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725146951366164049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb020d9cb88cab02953c6ff70442c3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fa147bb25e20e6e45b5023afb68ea1a7294a329cea308e5ccac8afd9e093ed,PodSandboxId:292695f6da67cf20aba8fd3b1c7e3a06568ee5e2d6a86040ced22b69281db795,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725146951364407744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde3dab0547da7fd71af298a165ec9d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c611b6316aa0347befd604f2b958d607824c1c8e6b6711fed197e93d7be7b289,PodSandboxId:d2f393550d9976fd2dcf4d23506dc7d61e5f66a72675c57f3361af4c85ed3a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725146951274541132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-828713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bf52316653f7055b8635851e83f6f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f9deded-65db-440e-a178-96912fa99ea8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8b9ca065c19a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago       Running             kube-scheduler            2                   9adb718c41f30       kube-scheduler-kubernetes-upgrade-828713
	c500d97b0d8ed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   bf1da61584bad       etcd-kubernetes-upgrade-828713
	6269e043f6589       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago       Running             kube-controller-manager   2                   4231cd275380a       kube-controller-manager-kubernetes-upgrade-828713
	552b905e032ee       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   a73e99c8ebbec       kube-apiserver-kubernetes-upgrade-828713
	0cca34c9a99a4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   13 seconds ago      Exited              etcd                      1                   0152d96511d5f       etcd-kubernetes-upgrade-828713
	33224bae77e04       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   13 seconds ago      Exited              kube-controller-manager   1                   c00dd4e4362bb       kube-controller-manager-kubernetes-upgrade-828713
	a8fa147bb25e2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   13 seconds ago      Exited              kube-apiserver            1                   292695f6da67c       kube-apiserver-kubernetes-upgrade-828713
	c611b6316aa03       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   13 seconds ago      Exited              kube-scheduler            1                   d2f393550d997       kube-scheduler-kubernetes-upgrade-828713
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-828713
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-828713
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:29:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-828713
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:29:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:29:21 +0000   Sat, 31 Aug 2024 23:29:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:29:21 +0000   Sat, 31 Aug 2024 23:29:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:29:21 +0000   Sat, 31 Aug 2024 23:29:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:29:21 +0000   Sat, 31 Aug 2024 23:29:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.109
	  Hostname:    kubernetes-upgrade-828713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cdba5da32644fc49418990c5ae582f2
	  System UUID:                6cdba5da-3264-4fc4-9418-990c5ae582f2
	  Boot ID:                    b5c73bd9-6507-4297-8a1c-c0ac014fc465
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 kube-apiserver-kubernetes-upgrade-828713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-828713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-scheduler-kubernetes-upgrade-828713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                550m (27%)  0 (0%)
	  memory             0 (0%)      0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 23s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  23s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x8 over 23s)  kubelet  Node kubernetes-upgrade-828713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 23s)  kubelet  Node kubernetes-upgrade-828713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 23s)  kubelet  Node kubernetes-upgrade-828713 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-828713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-828713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-828713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +2.706831] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619257] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.549470] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.065586] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072129] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.185309] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.150011] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.303665] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +4.365813] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +0.069405] kauditd_printk_skb: 130 callbacks suppressed
	[Aug31 23:29] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +6.469087] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.105038] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.641666] systemd-fstab-generator[1787]: Ignoring "noauto" option for root device
	[  +0.228313] systemd-fstab-generator[1801]: Ignoring "noauto" option for root device
	[  +0.275779] systemd-fstab-generator[1815]: Ignoring "noauto" option for root device
	[  +0.202501] systemd-fstab-generator[1827]: Ignoring "noauto" option for root device
	[  +0.397262] systemd-fstab-generator[1855]: Ignoring "noauto" option for root device
	[  +0.892261] systemd-fstab-generator[2047]: Ignoring "noauto" option for root device
	[  +0.080147] kauditd_printk_skb: 198 callbacks suppressed
	[  +2.376006] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	[  +6.221947] systemd-fstab-generator[2598]: Ignoring "noauto" option for root device
	[  +0.090849] kauditd_printk_skb: 97 callbacks suppressed
	
	
	==> etcd [0cca34c9a99a44fee7c37476020c5e5ebabab85ea42f6be6b55a1daa0a23ad8f] <==
	{"level":"info","ts":"2024-08-31T23:29:11.918789Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-31T23:29:11.969596Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","commit-index":282}
	{"level":"info","ts":"2024-08-31T23:29:11.969883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-31T23:29:11.969934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became follower at term 2"}
	{"level":"info","ts":"2024-08-31T23:29:11.969947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 46a65bd61cd538c0 [peers: [], term: 2, commit: 282, applied: 0, lastindex: 282, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-31T23:29:11.977306Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-31T23:29:11.991337Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":275}
	{"level":"info","ts":"2024-08-31T23:29:11.996272Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-31T23:29:12.010739Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"46a65bd61cd538c0","timeout":"7s"}
	{"level":"info","ts":"2024-08-31T23:29:12.012956Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"46a65bd61cd538c0"}
	{"level":"info","ts":"2024-08-31T23:29:12.013429Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"46a65bd61cd538c0","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-31T23:29:12.014794Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-31T23:29:12.044389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-31T23:29:12.045281Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-31T23:29:12.078364Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-31T23:29:12.051821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 switched to configuration voters=(5090857403953789120)"}
	{"level":"info","ts":"2024-08-31T23:29:12.078501Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","added-peer-id":"46a65bd61cd538c0","added-peer-peer-urls":["https://192.168.50.109:2380"]}
	{"level":"info","ts":"2024-08-31T23:29:12.078590Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:29:12.078615Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:29:12.078313Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:29:12.119650Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2024-08-31T23:29:12.119677Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2024-08-31T23:29:12.119536Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-31T23:29:12.136031Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"46a65bd61cd538c0","initial-advertise-peer-urls":["https://192.168.50.109:2380"],"listen-peer-urls":["https://192.168.50.109:2380"],"advertise-client-urls":["https://192.168.50.109:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.109:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T23:29:12.136091Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [c500d97b0d8ed00b2ed43f3a298c5e6611f7b6eb7fd6e09675e7e94795a6e86e] <==
	{"level":"info","ts":"2024-08-31T23:29:17.607890Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","added-peer-id":"46a65bd61cd538c0","added-peer-peer-urls":["https://192.168.50.109:2380"]}
	{"level":"info","ts":"2024-08-31T23:29:17.607985Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:29:17.608026Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:29:17.612805Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:29:17.628006Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-31T23:29:17.632546Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"46a65bd61cd538c0","initial-advertise-peer-urls":["https://192.168.50.109:2380"],"listen-peer-urls":["https://192.168.50.109:2380"],"advertise-client-urls":["https://192.168.50.109:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.109:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T23:29:17.632602Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T23:29:17.632751Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2024-08-31T23:29:17.632786Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2024-08-31T23:29:19.453798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-31T23:29:19.453869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-31T23:29:19.453890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 received MsgPreVoteResp from 46a65bd61cd538c0 at term 2"}
	{"level":"info","ts":"2024-08-31T23:29:19.453902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became candidate at term 3"}
	{"level":"info","ts":"2024-08-31T23:29:19.453908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 received MsgVoteResp from 46a65bd61cd538c0 at term 3"}
	{"level":"info","ts":"2024-08-31T23:29:19.453936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became leader at term 3"}
	{"level":"info","ts":"2024-08-31T23:29:19.453946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46a65bd61cd538c0 elected leader 46a65bd61cd538c0 at term 3"}
	{"level":"info","ts":"2024-08-31T23:29:19.461379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:29:19.462154Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:29:19.461332Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"46a65bd61cd538c0","local-member-attributes":"{Name:kubernetes-upgrade-828713 ClientURLs:[https://192.168.50.109:2379]}","request-path":"/0/members/46a65bd61cd538c0/attributes","cluster-id":"d0e6cadbc325cfac","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T23:29:19.462597Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:29:19.462935Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.109:2379"}
	{"level":"info","ts":"2024-08-31T23:29:19.462958Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T23:29:19.463151Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T23:29:19.463845Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:29:19.464615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:29:24 up 0 min,  0 users,  load average: 1.40, 0.36, 0.12
	Linux kubernetes-upgrade-828713 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [552b905e032eef48d9ce4fc61c3a99bfe504301a9fdc3c021ce965fc679c06b4] <==
	I0831 23:29:20.937857       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0831 23:29:21.023591       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 23:29:21.023723       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:29:21.023793       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 23:29:21.034567       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:29:21.034698       1 policy_source.go:224] refreshing policies
	I0831 23:29:21.035626       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:29:21.038317       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:29:21.038402       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:29:21.038441       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:29:21.038449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:29:21.038454       1 cache.go:39] Caches are synced for autoregister controller
	E0831 23:29:21.060869       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0831 23:29:21.099744       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:29:21.099823       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:29:21.101175       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:29:21.102780       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:29:21.108252       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:29:21.113561       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:29:21.905445       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0831 23:29:22.313994       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0831 23:29:22.344018       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0831 23:29:22.402681       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0831 23:29:22.500041       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0831 23:29:22.508476       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [a8fa147bb25e20e6e45b5023afb68ea1a7294a329cea308e5ccac8afd9e093ed] <==
	I0831 23:29:11.741175       1 options.go:228] external host was not specified, using 192.168.50.109
	I0831 23:29:11.743045       1 server.go:142] Version: v1.31.0
	I0831 23:29:11.743085       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:29:12.694654       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0831 23:29:12.743694       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:29:12.751084       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0831 23:29:12.751163       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0831 23:29:12.751451       1 instance.go:232] Using reconciler: lease
	
	
	==> kube-controller-manager [33224bae77e04c8fdbf715fb6914260b6d8cc5445c7c34bb76d5b9b2ff0a55df] <==
	I0831 23:29:12.996915       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [6269e043f658930162c6a6695f8d8714c0aa3229caff859c75f043abaf300c24] <==
	I0831 23:29:24.645622       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0831 23:29:24.645638       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I0831 23:29:24.645892       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0831 23:29:24.645907       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0831 23:29:24.645930       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0831 23:29:24.725593       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I0831 23:29:24.725724       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0831 23:29:24.725784       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0831 23:29:24.725795       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0831 23:29:24.770746       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I0831 23:29:24.770952       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0831 23:29:24.771292       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0831 23:29:24.771383       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0831 23:29:24.772236       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	E0831 23:29:24.820496       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0831 23:29:24.820540       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0831 23:29:24.872306       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I0831 23:29:24.872453       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0831 23:29:24.872488       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0831 23:29:24.921063       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I0831 23:29:24.921310       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0831 23:29:24.921340       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0831 23:29:25.073380       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I0831 23:29:25.073572       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0831 23:29:25.073601       1 shared_informer.go:313] Waiting for caches to sync for deployment
	
	
	==> kube-scheduler [a8b9ca065c19ac6f56cc3c089165efa2b6d8536aa486b09f24f9c0babc967def] <==
	I0831 23:29:18.378094       1 serving.go:386] Generated self-signed cert in-memory
	W0831 23:29:20.957340       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0831 23:29:20.957567       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 23:29:20.957815       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 23:29:20.958354       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 23:29:21.035874       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0831 23:29:21.036898       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:29:21.041626       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0831 23:29:21.041709       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:29:21.042674       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0831 23:29:21.042797       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0831 23:29:21.142747       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c611b6316aa0347befd604f2b958d607824c1c8e6b6711fed197e93d7be7b289] <==
	
	
	==> kubelet <==
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.808691    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/21b57ff91a4daba50d818595768b1f22-etcd-certs\") pod \"etcd-kubernetes-upgrade-828713\" (UID: \"21b57ff91a4daba50d818595768b1f22\") " pod="kube-system/etcd-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809045    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bde3dab0547da7fd71af298a165ec9d1-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-828713\" (UID: \"bde3dab0547da7fd71af298a165ec9d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809285    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb020d9cb88cab02953c6ff70442c3b6-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-828713\" (UID: \"cb020d9cb88cab02953c6ff70442c3b6\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809432    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cb020d9cb88cab02953c6ff70442c3b6-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-828713\" (UID: \"cb020d9cb88cab02953c6ff70442c3b6\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809450    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb020d9cb88cab02953c6ff70442c3b6-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-828713\" (UID: \"cb020d9cb88cab02953c6ff70442c3b6\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809741    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb020d9cb88cab02953c6ff70442c3b6-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-828713\" (UID: \"cb020d9cb88cab02953c6ff70442c3b6\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809865    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1bf52316653f7055b8635851e83f6f5-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-828713\" (UID: \"c1bf52316653f7055b8635851e83f6f5\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809883    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/21b57ff91a4daba50d818595768b1f22-etcd-data\") pod \"etcd-kubernetes-upgrade-828713\" (UID: \"21b57ff91a4daba50d818595768b1f22\") " pod="kube-system/etcd-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.809899    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bde3dab0547da7fd71af298a165ec9d1-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-828713\" (UID: \"bde3dab0547da7fd71af298a165ec9d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.810103    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bde3dab0547da7fd71af298a165ec9d1-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-828713\" (UID: \"bde3dab0547da7fd71af298a165ec9d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.810135    2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb020d9cb88cab02953c6ff70442c3b6-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-828713\" (UID: \"cb020d9cb88cab02953c6ff70442c3b6\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:16.986549    2322 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-828713"
	Aug 31 23:29:16 kubernetes-upgrade-828713 kubelet[2322]: E0831 23:29:16.987703    2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.109:8443: connect: connection refused" node="kubernetes-upgrade-828713"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:17.072121    2322 scope.go:117] "RemoveContainer" containerID="a8fa147bb25e20e6e45b5023afb68ea1a7294a329cea308e5ccac8afd9e093ed"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:17.072439    2322 scope.go:117] "RemoveContainer" containerID="0cca34c9a99a44fee7c37476020c5e5ebabab85ea42f6be6b55a1daa0a23ad8f"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:17.074283    2322 scope.go:117] "RemoveContainer" containerID="c611b6316aa0347befd604f2b958d607824c1c8e6b6711fed197e93d7be7b289"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:17.075030    2322 scope.go:117] "RemoveContainer" containerID="33224bae77e04c8fdbf715fb6914260b6d8cc5445c7c34bb76d5b9b2ff0a55df"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: E0831 23:29:17.200237    2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-828713?timeout=10s\": dial tcp 192.168.50.109:8443: connect: connection refused" interval="800ms"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:17.388669    2322 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-828713"
	Aug 31 23:29:17 kubernetes-upgrade-828713 kubelet[2322]: E0831 23:29:17.389486    2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.109:8443: connect: connection refused" node="kubernetes-upgrade-828713"
	Aug 31 23:29:18 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:18.192167    2322 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-828713"
	Aug 31 23:29:21 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:21.069128    2322 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-828713"
	Aug 31 23:29:21 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:21.069312    2322 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-828713"
	Aug 31 23:29:21 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:21.576074    2322 apiserver.go:52] "Watching apiserver"
	Aug 31 23:29:21 kubernetes-upgrade-828713 kubelet[2322]: I0831 23:29:21.601983    2322 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-828713 -n kubernetes-upgrade-828713
helpers_test.go:262: (dbg) Run:  kubectl --context kubernetes-upgrade-828713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: kube-proxy-v2ncb storage-provisioner
helpers_test.go:275: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context kubernetes-upgrade-828713 describe pod kube-proxy-v2ncb storage-provisioner
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-828713 describe pod kube-proxy-v2ncb storage-provisioner: exit status 1 (71.628543ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-proxy-v2ncb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:280: kubectl --context kubernetes-upgrade-828713 describe pod kube-proxy-v2ncb storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "kubernetes-upgrade-828713" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-828713
--- FAIL: TestKubernetesUpgrade (363.18s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (836.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-945775 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-945775 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m55.556356345s)

                                                
                                                
-- stdout --
	* [pause-945775] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-945775" primary control-plane node in "pause-945775" cluster
	* Updating the running kvm2 "pause-945775" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:28:18.551044   64902 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:28:18.551164   64902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:28:18.551175   64902 out.go:358] Setting ErrFile to fd 2...
	I0831 23:28:18.551182   64902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:28:18.551416   64902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:28:18.551936   64902 out.go:352] Setting JSON to false
	I0831 23:28:18.552845   64902 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7846,"bootTime":1725139053,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:28:18.552905   64902 start.go:139] virtualization: kvm guest
	I0831 23:28:18.555241   64902 out.go:177] * [pause-945775] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:28:18.556805   64902 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:28:18.556827   64902 notify.go:220] Checking for updates...
	I0831 23:28:18.559562   64902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:28:18.560982   64902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:28:18.562548   64902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:28:18.564112   64902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:28:18.565597   64902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:28:18.567611   64902 config.go:182] Loaded profile config "pause-945775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:28:18.567991   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:28:18.568034   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:28:18.582852   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0831 23:28:18.583267   64902 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:28:18.583908   64902 main.go:141] libmachine: Using API Version  1
	I0831 23:28:18.583930   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:28:18.584276   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:28:18.584457   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:18.584691   64902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:28:18.584973   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:28:18.585004   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:28:18.600347   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45317
	I0831 23:28:18.600782   64902 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:28:18.601292   64902 main.go:141] libmachine: Using API Version  1
	I0831 23:28:18.601319   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:28:18.601715   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:28:18.601905   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:18.640769   64902 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 23:28:18.642227   64902 start.go:297] selected driver: kvm2
	I0831 23:28:18.642240   64902 start.go:901] validating driver "kvm2" against &{Name:pause-945775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-945775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:28:18.642377   64902 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:28:18.642701   64902 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:28:18.642787   64902 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 23:28:18.659428   64902 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 23:28:18.660171   64902 cni.go:84] Creating CNI manager for ""
	I0831 23:28:18.660190   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 23:28:18.660254   64902 start.go:340] cluster config:
	{Name:pause-945775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-945775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:28:18.660407   64902 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:28:18.663121   64902 out.go:177] * Starting "pause-945775" primary control-plane node in "pause-945775" cluster
	I0831 23:28:18.664621   64902 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:28:18.664657   64902 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 23:28:18.664664   64902 cache.go:56] Caching tarball of preloaded images
	I0831 23:28:18.664742   64902 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 23:28:18.664752   64902 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:28:18.664846   64902 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/config.json ...
	I0831 23:28:18.665016   64902 start.go:360] acquireMachinesLock for pause-945775: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 23:28:19.928353   64902 start.go:364] duration metric: took 1.263301006s to acquireMachinesLock for "pause-945775"
	I0831 23:28:19.928394   64902 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:28:19.928400   64902 fix.go:54] fixHost starting: 
	I0831 23:28:19.928846   64902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:28:19.928897   64902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:28:19.946065   64902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35307
	I0831 23:28:19.946480   64902 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:28:19.946971   64902 main.go:141] libmachine: Using API Version  1
	I0831 23:28:19.946990   64902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:28:19.947312   64902 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:28:19.947552   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:19.947700   64902 main.go:141] libmachine: (pause-945775) Calling .GetState
	I0831 23:28:19.949244   64902 fix.go:112] recreateIfNeeded on pause-945775: state=Running err=<nil>
	W0831 23:28:19.949273   64902 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:28:19.951138   64902 out.go:177] * Updating the running kvm2 "pause-945775" VM ...
	I0831 23:28:19.952549   64902 machine.go:93] provisionDockerMachine start ...
	I0831 23:28:19.952574   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:19.952783   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:19.955753   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:19.956319   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:19.956343   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:19.956583   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:19.956722   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:19.956877   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:19.957005   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:19.957191   64902 main.go:141] libmachine: Using SSH client type: native
	I0831 23:28:19.957410   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.125 22 <nil> <nil>}
	I0831 23:28:19.957429   64902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:28:20.059929   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-945775
	
	I0831 23:28:20.059959   64902 main.go:141] libmachine: (pause-945775) Calling .GetMachineName
	I0831 23:28:20.060235   64902 buildroot.go:166] provisioning hostname "pause-945775"
	I0831 23:28:20.060264   64902 main.go:141] libmachine: (pause-945775) Calling .GetMachineName
	I0831 23:28:20.060467   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:20.063526   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.063874   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:20.063902   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.064043   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:20.064221   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.064357   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.064503   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:20.064666   64902 main.go:141] libmachine: Using SSH client type: native
	I0831 23:28:20.064845   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.125 22 <nil> <nil>}
	I0831 23:28:20.064861   64902 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-945775 && echo "pause-945775" | sudo tee /etc/hostname
	I0831 23:28:20.193661   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-945775
	
	I0831 23:28:20.193712   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:20.196820   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.197244   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:20.197270   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.197444   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:20.197654   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.197839   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.197988   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:20.198175   64902 main.go:141] libmachine: Using SSH client type: native
	I0831 23:28:20.198404   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.125 22 <nil> <nil>}
	I0831 23:28:20.198429   64902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-945775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-945775/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-945775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:28:20.308218   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:28:20.308247   64902 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18943-13149/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-13149/.minikube}
	I0831 23:28:20.308292   64902 buildroot.go:174] setting up certificates
	I0831 23:28:20.308305   64902 provision.go:84] configureAuth start
	I0831 23:28:20.308316   64902 main.go:141] libmachine: (pause-945775) Calling .GetMachineName
	I0831 23:28:20.308603   64902 main.go:141] libmachine: (pause-945775) Calling .GetIP
	I0831 23:28:20.311201   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.311538   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:20.311565   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.311711   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:20.313764   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.314180   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:20.314206   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.314321   64902 provision.go:143] copyHostCerts
	I0831 23:28:20.314378   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem, removing ...
	I0831 23:28:20.314395   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem
	I0831 23:28:20.314455   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/ca.pem (1082 bytes)
	I0831 23:28:20.314541   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem, removing ...
	I0831 23:28:20.314548   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem
	I0831 23:28:20.314568   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/cert.pem (1123 bytes)
	I0831 23:28:20.314620   64902 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem, removing ...
	I0831 23:28:20.314626   64902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem
	I0831 23:28:20.314642   64902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-13149/.minikube/key.pem (1675 bytes)
	I0831 23:28:20.314727   64902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem org=jenkins.pause-945775 san=[127.0.0.1 192.168.83.125 localhost minikube pause-945775]
	I0831 23:28:20.637862   64902 provision.go:177] copyRemoteCerts
	I0831 23:28:20.637917   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:28:20.637938   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:20.640550   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.640891   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:20.640920   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.641102   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:20.641256   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.641421   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:20.641565   64902 sshutil.go:53] new ssh client: &{IP:192.168.83.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/pause-945775/id_rsa Username:docker}
	I0831 23:28:20.722558   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 23:28:20.748873   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:28:20.776491   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0831 23:28:20.811295   64902 provision.go:87] duration metric: took 502.976603ms to configureAuth
	I0831 23:28:20.811345   64902 buildroot.go:189] setting minikube options for container-runtime
	I0831 23:28:20.811550   64902 config.go:182] Loaded profile config "pause-945775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:28:20.811645   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:20.814680   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.815104   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:20.815132   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:20.815365   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:20.815577   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.815735   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:20.815903   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:20.816105   64902 main.go:141] libmachine: Using SSH client type: native
	I0831 23:28:20.816322   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.125 22 <nil> <nil>}
	I0831 23:28:20.816346   64902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:28:29.081359   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:28:29.081397   64902 machine.go:96] duration metric: took 9.128829311s to provisionDockerMachine
	I0831 23:28:29.081409   64902 start.go:293] postStartSetup for "pause-945775" (driver="kvm2")
	I0831 23:28:29.081421   64902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:28:29.081443   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:29.081843   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:28:29.081870   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:29.084642   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.085021   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:29.085046   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.085206   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:29.085402   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:29.085557   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:29.085834   64902 sshutil.go:53] new ssh client: &{IP:192.168.83.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/pause-945775/id_rsa Username:docker}
	I0831 23:28:29.166028   64902 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:28:29.170882   64902 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 23:28:29.170907   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/addons for local assets ...
	I0831 23:28:29.170973   64902 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-13149/.minikube/files for local assets ...
	I0831 23:28:29.171068   64902 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem -> 203692.pem in /etc/ssl/certs
	I0831 23:28:29.171178   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:28:29.180264   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:28:29.209045   64902 start.go:296] duration metric: took 127.621516ms for postStartSetup
	I0831 23:28:29.209084   64902 fix.go:56] duration metric: took 9.280683314s for fixHost
	I0831 23:28:29.209107   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:29.211731   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.212124   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:29.212153   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.212334   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:29.212538   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:29.212695   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:29.212829   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:29.213000   64902 main.go:141] libmachine: Using SSH client type: native
	I0831 23:28:29.213170   64902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.125 22 <nil> <nil>}
	I0831 23:28:29.213183   64902 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 23:28:29.438468   64902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725146909.418942303
	
	I0831 23:28:29.438492   64902 fix.go:216] guest clock: 1725146909.418942303
	I0831 23:28:29.438503   64902 fix.go:229] Guest: 2024-08-31 23:28:29.418942303 +0000 UTC Remote: 2024-08-31 23:28:29.209089292 +0000 UTC m=+10.695182321 (delta=209.853011ms)
	I0831 23:28:29.438552   64902 fix.go:200] guest clock delta is within tolerance: 209.853011ms
	I0831 23:28:29.438563   64902 start.go:83] releasing machines lock for "pause-945775", held for 9.510188729s
	I0831 23:28:29.438592   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:29.438872   64902 main.go:141] libmachine: (pause-945775) Calling .GetIP
	I0831 23:28:29.441634   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.442041   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:29.442073   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.442241   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:29.442712   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:29.442881   64902 main.go:141] libmachine: (pause-945775) Calling .DriverName
	I0831 23:28:29.442933   64902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:28:29.442982   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:29.443043   64902 ssh_runner.go:195] Run: cat /version.json
	I0831 23:28:29.443060   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHHostname
	I0831 23:28:29.445748   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.445998   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.446142   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:29.446190   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.446432   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:29.446447   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:28:29.446493   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:28:29.446572   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:29.446588   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHPort
	I0831 23:28:29.446762   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:29.446762   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHKeyPath
	I0831 23:28:29.446916   64902 sshutil.go:53] new ssh client: &{IP:192.168.83.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/pause-945775/id_rsa Username:docker}
	I0831 23:28:29.447003   64902 main.go:141] libmachine: (pause-945775) Calling .GetSSHUsername
	I0831 23:28:29.447143   64902 sshutil.go:53] new ssh client: &{IP:192.168.83.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/pause-945775/id_rsa Username:docker}
	I0831 23:28:29.607794   64902 ssh_runner.go:195] Run: systemctl --version
	I0831 23:28:29.733711   64902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:28:29.977252   64902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 23:28:30.020976   64902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 23:28:30.021061   64902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:28:30.055889   64902 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:28:30.055919   64902 start.go:495] detecting cgroup driver to use...
	I0831 23:28:30.055988   64902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:28:30.155317   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:28:30.190709   64902 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:28:30.190773   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:28:30.216664   64902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:28:30.233296   64902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:28:30.445216   64902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:28:30.644379   64902 docker.go:233] disabling docker service ...
	I0831 23:28:30.644477   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:28:30.686557   64902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:28:30.708358   64902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:28:30.884686   64902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:28:31.044272   64902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:28:31.060538   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:28:31.081001   64902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:28:31.081074   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.093598   64902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:28:31.093672   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.105257   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.119952   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.130310   64902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:28:31.144941   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.158342   64902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.177811   64902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:28:31.189427   64902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:28:31.199232   64902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:28:31.211671   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:28:31.392823   64902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:30:01.711622   64902 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.318758957s)
	I0831 23:30:01.711655   64902 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:30:01.711716   64902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:30:01.717548   64902 start.go:563] Will wait 60s for crictl version
	I0831 23:30:01.717602   64902 ssh_runner.go:195] Run: which crictl
	I0831 23:30:01.722217   64902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:30:01.771890   64902 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0831 23:30:01.771976   64902 ssh_runner.go:195] Run: crio --version
	I0831 23:30:01.804668   64902 ssh_runner.go:195] Run: crio --version
	I0831 23:30:01.847084   64902 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0831 23:30:01.848476   64902 main.go:141] libmachine: (pause-945775) Calling .GetIP
	I0831 23:30:01.851716   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:30:01.852110   64902 main.go:141] libmachine: (pause-945775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a8:88", ip: ""} in network mk-pause-945775: {Iface:virbr1 ExpiryTime:2024-09-01 00:27:44 +0000 UTC Type:0 Mac:52:54:00:41:a8:88 Iaid: IPaddr:192.168.83.125 Prefix:24 Hostname:pause-945775 Clientid:01:52:54:00:41:a8:88}
	I0831 23:30:01.852151   64902 main.go:141] libmachine: (pause-945775) DBG | domain pause-945775 has defined IP address 192.168.83.125 and MAC address 52:54:00:41:a8:88 in network mk-pause-945775
	I0831 23:30:01.852447   64902 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0831 23:30:01.858238   64902 kubeadm.go:883] updating cluster {Name:pause-945775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-945775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:30:01.858439   64902 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:30:01.858521   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:30:01.912497   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:30:01.912526   64902 crio.go:433] Images already preloaded, skipping extraction
	I0831 23:30:01.912584   64902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:30:01.956998   64902 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:30:01.957023   64902 cache_images.go:84] Images are preloaded, skipping loading
	I0831 23:30:01.957033   64902 kubeadm.go:934] updating node { 192.168.83.125 8443 v1.31.0 crio true true} ...
	I0831 23:30:01.957179   64902 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-945775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-945775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:30:01.957281   64902 ssh_runner.go:195] Run: crio config
	I0831 23:30:02.012000   64902 cni.go:84] Creating CNI manager for ""
	I0831 23:30:02.012023   64902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 23:30:02.012040   64902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:30:02.012070   64902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-945775 NodeName:pause-945775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 23:30:02.012268   64902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-945775"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 23:30:02.012345   64902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:30:02.023821   64902 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:30:02.023888   64902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 23:30:02.038553   64902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 23:30:02.063548   64902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:30:02.089317   64902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0831 23:30:02.119008   64902 ssh_runner.go:195] Run: grep 192.168.83.125	control-plane.minikube.internal$ /etc/hosts
	I0831 23:30:02.125927   64902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:30:02.280622   64902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:30:02.299415   64902 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775 for IP: 192.168.83.125
	I0831 23:30:02.299439   64902 certs.go:194] generating shared ca certs ...
	I0831 23:30:02.299462   64902 certs.go:226] acquiring lock for ca certs: {Name:mk6299ca821fca8d08b859998e864922182a3966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:30:02.299626   64902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key
	I0831 23:30:02.299672   64902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key
	I0831 23:30:02.299681   64902 certs.go:256] generating profile certs ...
	I0831 23:30:02.299778   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/client.key
	I0831 23:30:02.299850   64902 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/apiserver.key.1548c553
	I0831 23:30:02.299903   64902 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/proxy-client.key
	I0831 23:30:02.300041   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem (1338 bytes)
	W0831 23:30:02.300073   64902 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369_empty.pem, impossibly tiny 0 bytes
	I0831 23:30:02.300082   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 23:30:02.300110   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:30:02.300138   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:30:02.300167   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/certs/key.pem (1675 bytes)
	I0831 23:30:02.300230   64902 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem (1708 bytes)
	I0831 23:30:02.301077   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:30:02.334554   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:30:02.363390   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:30:02.391856   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:30:02.422079   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 23:30:02.453078   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 23:30:02.482968   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:30:02.512825   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/pause-945775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:30:02.542823   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/ssl/certs/203692.pem --> /usr/share/ca-certificates/203692.pem (1708 bytes)
	I0831 23:30:02.571309   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:30:02.598758   64902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-13149/.minikube/certs/20369.pem --> /usr/share/ca-certificates/20369.pem (1338 bytes)
	I0831 23:30:02.627228   64902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:30:02.653157   64902 ssh_runner.go:195] Run: openssl version
	I0831 23:30:02.663232   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/203692.pem && ln -fs /usr/share/ca-certificates/203692.pem /etc/ssl/certs/203692.pem"
	I0831 23:30:02.678223   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/203692.pem
	I0831 23:30:02.684596   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:24 /usr/share/ca-certificates/203692.pem
	I0831 23:30:02.684659   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/203692.pem
	I0831 23:30:02.693516   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/203692.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:30:02.705056   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:30:02.718806   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:30:02.724041   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:30:02.724120   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:30:02.730575   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:30:02.743765   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20369.pem && ln -fs /usr/share/ca-certificates/20369.pem /etc/ssl/certs/20369.pem"
	I0831 23:30:02.757564   64902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20369.pem
	I0831 23:30:02.762650   64902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:24 /usr/share/ca-certificates/20369.pem
	I0831 23:30:02.762722   64902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20369.pem
	I0831 23:30:02.769297   64902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20369.pem /etc/ssl/certs/51391683.0"
	I0831 23:30:02.781694   64902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:30:02.787183   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:30:02.794040   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:30:02.801163   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:30:02.807936   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:30:02.814798   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:30:02.821938   64902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
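The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each existing control-plane certificate is still valid for at least another 24 hours before it is reused. A minimal Go sketch of the same check, with an assumed local path used purely for illustration:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report whether the certificate expires within the next 24 hours.
	func main() {
		data, err := os.ReadFile("apiserver.crt") // assumed local path for illustration
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Printf("certificate expires within 24h (NotAfter=%s)\n", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}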
	I0831 23:30:02.829054   64902 kubeadm.go:392] StartCluster: {Name:pause-945775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-945775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:30:02.829273   64902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 23:30:02.829350   64902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:30:02.880295   64902 cri.go:89] found id: "761a8c04a6ae0e75a16110720ddfbef2ab3fe7c0b926b6f20b01624e99f5e294"
	I0831 23:30:02.880323   64902 cri.go:89] found id: "644ad3ae72f03b0e2c4fec9033d148a7ba69a4abdc6b935a0008a35d5248ec94"
	I0831 23:30:02.880329   64902 cri.go:89] found id: "6ebac9b570cafb010a026ab577c1073b51ef349dad942a90023ad0bd806ed517"
	I0831 23:30:02.880334   64902 cri.go:89] found id: "3335914a66d938b426e1cf1aa4524c7217f4410c46d03d131a223fac8b4a06db"
	I0831 23:30:02.880338   64902 cri.go:89] found id: "29aec999f5bd531df98042bb069a8697ddda89c396504fe192d71d0bd0e8ca5d"
	I0831 23:30:02.880345   64902 cri.go:89] found id: "4c6828f84f5cc06f9ff7c824ee7461dae9aae814a0ccd125dd6e369a7e15ac4d"
	I0831 23:30:02.880350   64902 cri.go:89] found id: "3c5cbfa36af08287bbb26d7f24d65634fe682df5f7148ab9e0f79ed557c0c630"
	I0831 23:30:02.880354   64902 cri.go:89] found id: "dfc731626e7a312df08773937fdae5aae21c68be6dae89f6e77bd7d21ccfc349"
	I0831 23:30:02.880358   64902 cri.go:89] found id: "d984501267c057705a6e386b4c6d8b0eb6102325b23028930fe7cba85be9a113"
	I0831 23:30:02.880366   64902 cri.go:89] found id: "582c7fb6ae274c4be60793e7b1f2a2691c57a4d775f23c77b89651b119384631"
	I0831 23:30:02.880370   64902 cri.go:89] found id: ""
	I0831 23:30:02.880424   64902 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
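Near the end of the log above, before deciding how to restart the cluster, minikube enumerates the kube-system containers over CRI with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`. A minimal Go sketch wrapping the same invocation via os/exec, assuming crictl and passwordless sudo are available on the host being inspected:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Run the same listing the log shows minikube executing over SSH:
	// container IDs for pods in the kube-system namespace.
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}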
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-945775 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio" : exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-945775 -n pause-945775
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-945775 -n pause-945775: exit status 2 (222.528588ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-945775 logs -n 25
helpers_test.go:253: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-009399 sudo cat                              | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo                                  | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo                                  | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo                                  | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo cat                              | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo cat                              | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo                                  | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo                                  | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo                                  | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo find                             | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-009399 sudo crio                             | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-009399                                       | bridge-009399          | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:34 UTC |
	| start   | -p embed-certs-673135                                  | embed-certs-673135     | jenkins | v1.33.1 | 31 Aug 24 23:34 UTC | 31 Aug 24 23:35 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317102             | no-preload-317102      | jenkins | v1.33.1 | 31 Aug 24 23:35 UTC | 31 Aug 24 23:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-317102                                   | no-preload-317102      | jenkins | v1.33.1 | 31 Aug 24 23:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-673135            | embed-certs-673135     | jenkins | v1.33.1 | 31 Aug 24 23:35 UTC | 31 Aug 24 23:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-673135                                  | embed-certs-673135     | jenkins | v1.33.1 | 31 Aug 24 23:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-885101        | old-k8s-version-885101 | jenkins | v1.33.1 | 31 Aug 24 23:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317102                  | no-preload-317102      | jenkins | v1.33.1 | 31 Aug 24 23:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-317102                                   | no-preload-317102      | jenkins | v1.33.1 | 31 Aug 24 23:37 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-673135                 | embed-certs-673135     | jenkins | v1.33.1 | 31 Aug 24 23:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-673135                                  | embed-certs-673135     | jenkins | v1.33.1 | 31 Aug 24 23:38 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-885101                              | old-k8s-version-885101 | jenkins | v1.33.1 | 31 Aug 24 23:38 UTC | 31 Aug 24 23:38 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-885101             | old-k8s-version-885101 | jenkins | v1.33.1 | 31 Aug 24 23:38 UTC | 31 Aug 24 23:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-885101                              | old-k8s-version-885101 | jenkins | v1.33.1 | 31 Aug 24 23:38 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:38:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:38:51.713658   80252 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:38:51.713750   80252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:38:51.713757   80252 out.go:358] Setting ErrFile to fd 2...
	I0831 23:38:51.713761   80252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:38:51.713949   80252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:38:51.714469   80252 out.go:352] Setting JSON to false
	I0831 23:38:51.715392   80252 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8479,"bootTime":1725139053,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:38:51.715447   80252 start.go:139] virtualization: kvm guest
	I0831 23:38:51.717498   80252 out.go:177] * [old-k8s-version-885101] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:38:51.718718   80252 notify.go:220] Checking for updates...
	I0831 23:38:51.718747   80252 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:38:51.719927   80252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:38:51.721215   80252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:38:51.722473   80252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:38:51.723702   80252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:38:51.725015   80252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:38:51.726520   80252 config.go:182] Loaded profile config "old-k8s-version-885101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0831 23:38:51.726904   80252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:38:51.726946   80252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:38:51.742014   80252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0831 23:38:51.742412   80252 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:38:51.742867   80252 main.go:141] libmachine: Using API Version  1
	I0831 23:38:51.742886   80252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:38:51.743203   80252 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:38:51.743373   80252 main.go:141] libmachine: (old-k8s-version-885101) Calling .DriverName
	I0831 23:38:51.745319   80252 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0831 23:38:51.746604   80252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:38:51.746900   80252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:38:51.746936   80252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:38:51.761419   80252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0831 23:38:51.761791   80252 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:38:51.762208   80252 main.go:141] libmachine: Using API Version  1
	I0831 23:38:51.762227   80252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:38:51.762559   80252 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:38:51.762714   80252 main.go:141] libmachine: (old-k8s-version-885101) Calling .DriverName
	I0831 23:38:51.798261   80252 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 23:38:51.799482   80252 start.go:297] selected driver: kvm2
	I0831 23:38:51.799496   80252 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-885101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:38:51.799615   80252 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:38:51.800248   80252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:38:51.800334   80252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 23:38:51.815841   80252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 23:38:51.816224   80252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:38:51.816256   80252 cni.go:84] Creating CNI manager for ""
	I0831 23:38:51.816264   80252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 23:38:51.816300   80252 start.go:340] cluster config:
	{Name:old-k8s-version-885101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885101 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:38:51.816407   80252 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:38:51.818302   80252 out.go:177] * Starting "old-k8s-version-885101" primary control-plane node in "old-k8s-version-885101" cluster
	I0831 23:38:51.819891   80252 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 23:38:51.819931   80252 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0831 23:38:51.819941   80252 cache.go:56] Caching tarball of preloaded images
	I0831 23:38:51.820044   80252 preload.go:172] Found /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0831 23:38:51.820061   80252 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0831 23:38:51.820184   80252 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/old-k8s-version-885101/config.json ...
	I0831 23:38:51.820422   80252 start.go:360] acquireMachinesLock for old-k8s-version-885101: {Name:mka77c7b948accb83744d97c952b8b1f88f84da7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 23:38:54.755565   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:38:57.827604   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:03.907486   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:06.979550   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:13.059568   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:16.131519   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:22.211558   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:25.283664   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:31.363608   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:34.435634   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:40.515554   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:43.587544   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:49.667570   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:52.739589   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:39:58.819591   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:01.891653   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:07.971573   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:11.043585   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:17.123588   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:20.195581   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:26.275574   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:29.347557   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:35.427600   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:38.499531   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:44.579609   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:47.651615   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:53.731510   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:40:56.803551   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:02.883624   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:05.955626   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:12.035581   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:15.107563   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:21.187581   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:24.259637   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:30.339630   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:33.411561   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:39.491532   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:42.563566   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:48.643578   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:51.715544   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:41:57.795569   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:42:00.867588   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	I0831 23:42:06.947559   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
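The long run of "Error dialing TCP ... no route to host" lines above is a concurrent minikube start (the 79728 process) repeatedly probing 192.168.50.35:22 while that guest is unreachable. A minimal sketch of that kind of SSH-port probe loop; this is a hypothetical helper for illustration, not the actual libmachine code, and the attempt count and delays are assumptions:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Probe a TCP address until it accepts a connection or the attempts run out.
	func waitForSSH(addr string, attempts int, delay time.Duration) error {
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("attempt %d: %v\n", i+1, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("%s did not become reachable after %d attempts", addr, attempts)
	}

	func main() {
		if err := waitForSSH("192.168.50.35:22", 5, 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}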
	I0831 23:42:13.023037   64902 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0831 23:42:13.023126   64902 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0831 23:42:13.024680   64902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 23:42:13.024740   64902 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 23:42:13.024853   64902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 23:42:13.024959   64902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 23:42:13.025043   64902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 23:42:13.025100   64902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 23:42:13.026977   64902 out.go:235]   - Generating certificates and keys ...
	I0831 23:42:13.027065   64902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 23:42:13.027153   64902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 23:42:13.027245   64902 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0831 23:42:13.027302   64902 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0831 23:42:13.027405   64902 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0831 23:42:13.027482   64902 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0831 23:42:13.027554   64902 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0831 23:42:13.027666   64902 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0831 23:42:13.027743   64902 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0831 23:42:13.027823   64902 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0831 23:42:13.027877   64902 kubeadm.go:310] [certs] Using the existing "sa" key
	I0831 23:42:13.027946   64902 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 23:42:13.027995   64902 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 23:42:13.028040   64902 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 23:42:13.028113   64902 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 23:42:13.028194   64902 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 23:42:13.028278   64902 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 23:42:13.028350   64902 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 23:42:13.028411   64902 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 23:42:13.029867   64902 out.go:235]   - Booting up control plane ...
	I0831 23:42:13.029961   64902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 23:42:13.030079   64902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 23:42:13.030155   64902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 23:42:13.030257   64902 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 23:42:13.030355   64902 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 23:42:13.030389   64902 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 23:42:13.030543   64902 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 23:42:13.030689   64902 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 23:42:13.030762   64902 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.844291ms
	I0831 23:42:13.030836   64902 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 23:42:13.030888   64902 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000352839s
	I0831 23:42:13.030894   64902 kubeadm.go:310] 
	I0831 23:42:13.030930   64902 kubeadm.go:310] Unfortunately, an error has occurred:
	I0831 23:42:13.030959   64902 kubeadm.go:310] 	context deadline exceeded
	I0831 23:42:13.030965   64902 kubeadm.go:310] 
	I0831 23:42:13.030990   64902 kubeadm.go:310] This error is likely caused by:
	I0831 23:42:13.031038   64902 kubeadm.go:310] 	- The kubelet is not running
	I0831 23:42:13.031160   64902 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0831 23:42:13.031170   64902 kubeadm.go:310] 
	I0831 23:42:13.031277   64902 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0831 23:42:13.031307   64902 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0831 23:42:13.031372   64902 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0831 23:42:13.031384   64902 kubeadm.go:310] 
	I0831 23:42:13.031526   64902 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0831 23:42:13.031632   64902 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0831 23:42:13.031705   64902 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0831 23:42:13.031781   64902 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0831 23:42:13.031861   64902 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0831 23:42:13.032003   64902 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0831 23:42:13.032042   64902 kubeadm.go:394] duration metric: took 12m10.202997809s to StartCluster
	I0831 23:42:13.032085   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:42:13.032135   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:42:13.081171   64902 cri.go:89] found id: ""
	I0831 23:42:13.081195   64902 logs.go:276] 0 containers: []
	W0831 23:42:13.081203   64902 logs.go:278] No container was found matching "kube-apiserver"
	I0831 23:42:13.081209   64902 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:42:13.081264   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:42:13.122938   64902 cri.go:89] found id: "ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6"
	I0831 23:42:13.122958   64902 cri.go:89] found id: ""
	I0831 23:42:13.122965   64902 logs.go:276] 1 containers: [ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6]
	I0831 23:42:13.123008   64902 ssh_runner.go:195] Run: which crictl
	I0831 23:42:13.127250   64902 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:42:13.127309   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:42:13.162600   64902 cri.go:89] found id: ""
	I0831 23:42:13.162632   64902 logs.go:276] 0 containers: []
	W0831 23:42:13.162640   64902 logs.go:278] No container was found matching "coredns"
	I0831 23:42:13.162646   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:42:13.162693   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:42:13.196185   64902 cri.go:89] found id: "adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e"
	I0831 23:42:13.196210   64902 cri.go:89] found id: ""
	I0831 23:42:13.196219   64902 logs.go:276] 1 containers: [adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e]
	I0831 23:42:13.196263   64902 ssh_runner.go:195] Run: which crictl
	I0831 23:42:13.200301   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:42:13.200356   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:42:13.234703   64902 cri.go:89] found id: ""
	I0831 23:42:13.234729   64902 logs.go:276] 0 containers: []
	W0831 23:42:13.234739   64902 logs.go:278] No container was found matching "kube-proxy"
	I0831 23:42:13.234746   64902 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:42:13.234806   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:42:13.270615   64902 cri.go:89] found id: "30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754"
	I0831 23:42:13.270637   64902 cri.go:89] found id: "a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783"
	I0831 23:42:13.270641   64902 cri.go:89] found id: ""
	I0831 23:42:13.270648   64902 logs.go:276] 2 containers: [30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754 a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783]
	I0831 23:42:13.270692   64902 ssh_runner.go:195] Run: which crictl
	I0831 23:42:13.275010   64902 ssh_runner.go:195] Run: which crictl
	I0831 23:42:13.278919   64902 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:42:13.278976   64902 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:42:13.319101   64902 cri.go:89] found id: ""
	I0831 23:42:13.319129   64902 logs.go:276] 0 containers: []
	W0831 23:42:13.319138   64902 logs.go:278] No container was found matching "kindnet"
	I0831 23:42:13.319166   64902 logs.go:123] Gathering logs for kubelet ...
	I0831 23:42:13.319178   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:42:13.461347   64902 logs.go:123] Gathering logs for dmesg ...
	I0831 23:42:13.461380   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:42:13.478380   64902 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:42:13.478404   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:42:10.019587   79728 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.35:22: connect: no route to host
	W0831 23:42:13.565647   64902 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0831 23:42:13.565667   64902 logs.go:123] Gathering logs for kube-scheduler [adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e] ...
	I0831 23:42:13.565681   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e"
	I0831 23:42:13.658793   64902 logs.go:123] Gathering logs for kube-controller-manager [30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754] ...
	I0831 23:42:13.658827   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754"
	I0831 23:42:13.693873   64902 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:42:13.693901   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:42:13.925532   64902 logs.go:123] Gathering logs for etcd [ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6] ...
	I0831 23:42:13.925575   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6"
	I0831 23:42:13.967450   64902 logs.go:123] Gathering logs for kube-controller-manager [a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783] ...
	I0831 23:42:13.967476   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783"
	I0831 23:42:14.009099   64902 logs.go:123] Gathering logs for container status ...
	I0831 23:42:14.009130   64902 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0831 23:42:14.054680   64902 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.844291ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000352839s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0831 23:38:10.913280    9612 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0831 23:38:10.914144    9612 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0831 23:42:14.054748   64902 out.go:270] * 
	W0831 23:42:14.054820   64902 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.844291ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000352839s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0831 23:38:10.913280    9612 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0831 23:38:10.914144    9612 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0831 23:42:14.054837   64902 out.go:270] * 
	W0831 23:42:14.055680   64902 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 23:42:14.058554   64902 out.go:201] 
	W0831 23:42:14.059807   64902 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.844291ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000352839s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0831 23:38:10.913280    9612 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0831 23:38:10.914144    9612 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0831 23:42:14.059848   64902 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0831 23:42:14.059916   64902 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0831 23:42:14.061513   64902 out.go:201] 
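[Editor's note, not part of the captured logs] The wait-control-plane failure above shows the kubelet passing its health check quickly but the API server never becoming healthy within 4m0s, so kubeadm aborts with "context deadline exceeded". A minimal triage sketch, assuming shell access to the node (for example via "minikube ssh -p pause-945775"); the commands are the ones already suggested in the kubeadm/minikube output above, and CONTAINERID is a placeholder:

  # check the kubelet service and its recent log
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet --no-pager | tail -n 100
  # list control-plane containers started by CRI-O; kube-apiserver is the component that never became healthy
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # if the cgroup-driver mismatch suggested above is the cause, retry the start as advised:
  minikube start -p pause-945775 --extra-config=kubelet.cgroup-driver=systemd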
	
	
	==> CRI-O <==
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.612285349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb7424cd-4e3d-4538-881c-64570edca732 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.613290390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbae9a05-26c4-4eee-b956-38d9e7751ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.613836593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147734613809333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbae9a05-26c4-4eee-b956-38d9e7751ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.615162190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=444cc0f8-2046-4434-8abe-94f2a98a3a22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.615279290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=444cc0f8-2046-4434-8abe-94f2a98a3a22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.615397066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725147723947378900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 17
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725147627954244971,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restar
tCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6,PodSandboxId:408179907c6bb1aece54956913d8f54563fedc0f2b44d5e359a15076d05af8ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725147493591878680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44d1d76ee638961a3d7102d05c433e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e,PodSandboxId:91e9dad17224b2858ef0fd3723db997e56b360ac9b01580f10b540158d246115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725147493582939039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 444cd2379ec783962ea76609a289a609,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=444cc0f8-2046-4434-8abe-94f2a98a3a22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.647674128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a20c167-3d69-4160-a9f6-3cef3801b421 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.647744194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a20c167-3d69-4160-a9f6-3cef3801b421 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.648931351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ca3551b-29d1-43f6-a21b-66f08080f44e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.649291423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147734649267604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ca3551b-29d1-43f6-a21b-66f08080f44e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.649900691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6815cf40-a136-4b99-9564-af7e265f2e36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.649950648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6815cf40-a136-4b99-9564-af7e265f2e36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.650053313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725147723947378900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 17
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725147627954244971,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restar
tCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6,PodSandboxId:408179907c6bb1aece54956913d8f54563fedc0f2b44d5e359a15076d05af8ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725147493591878680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44d1d76ee638961a3d7102d05c433e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e,PodSandboxId:91e9dad17224b2858ef0fd3723db997e56b360ac9b01580f10b540158d246115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725147493582939039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 444cd2379ec783962ea76609a289a609,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6815cf40-a136-4b99-9564-af7e265f2e36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.682333586Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=96faf5ab-7b1c-4aeb-9963-0d1cec84eda1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.682492521Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:91e9dad17224b2858ef0fd3723db997e56b360ac9b01580f10b540158d246115,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-945775,Uid:444cd2379ec783962ea76609a289a609,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725147493373308493,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 444cd2379ec783962ea76609a289a609,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 444cd2379ec783962ea76609a289a609,kubernetes.io/config.seen: 2024-08-31T23:38:12.910912280Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:37821e1db8db2b8b69a7502a72392d98a2da91a69991a990153ce6334c659529,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-945775,Ui
d:4fd381b51d44e1693e8f98ce1aee6f05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725147493361416250,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fd381b51d44e1693e8f98ce1aee6f05,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.125:8443,kubernetes.io/config.hash: 4fd381b51d44e1693e8f98ce1aee6f05,kubernetes.io/config.seen: 2024-08-31T23:38:12.910909884Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-945775,Uid:c7870f5e856f6c2339889519e2f77055,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725147493355226442,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c7870f5e856f6c2339889519e2f77055,kubernetes.io/config.seen: 2024-08-31T23:38:12.910911138Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:408179907c6bb1aece54956913d8f54563fedc0f2b44d5e359a15076d05af8ad,Metadata:&PodSandboxMetadata{Name:etcd-pause-945775,Uid:d44d1d76ee638961a3d7102d05c433e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725147493345427632,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44d1d76ee638961a3d7102d05c433e3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.125:2379,kubernetes.io/config.hash: d44d1d76ee6389
61a3d7102d05c433e3,kubernetes.io/config.seen: 2024-08-31T23:38:12.910906214Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=96faf5ab-7b1c-4aeb-9963-0d1cec84eda1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.683289017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9ae5688-6c7b-448b-90e8-1c86b0e3c9f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.683344255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9ae5688-6c7b-448b-90e8-1c86b0e3c9f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.683462727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725147723947378900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 17
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725147627954244971,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restar
tCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6,PodSandboxId:408179907c6bb1aece54956913d8f54563fedc0f2b44d5e359a15076d05af8ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725147493591878680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44d1d76ee638961a3d7102d05c433e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e,PodSandboxId:91e9dad17224b2858ef0fd3723db997e56b360ac9b01580f10b540158d246115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725147493582939039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 444cd2379ec783962ea76609a289a609,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9ae5688-6c7b-448b-90e8-1c86b0e3c9f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.692040611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66c0d788-556a-42db-8aea-7003befa0926 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.692096600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66c0d788-556a-42db-8aea-7003befa0926 name=/runtime.v1.RuntimeService/Version
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.696817347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7141587b-e171-49f1-a0e3-2464e1d1193c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.697171472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147734697153359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7141587b-e171-49f1-a0e3-2464e1d1193c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.697739488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b20df2cf-fe99-4091-99d5-1de60924f233 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.697788069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b20df2cf-fe99-4091-99d5-1de60924f233 name=/runtime.v1.RuntimeService/ListContainers
	Aug 31 23:42:14 pause-945775 crio[2659]: time="2024-08-31 23:42:14.697909837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725147723947378900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 17
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783,PodSandboxId:71752829e2154f14551b6c7e0237b8d854096c92723f15fc7eb82f1df319bed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725147627954244971,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7870f5e856f6c2339889519e2f77055,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restar
tCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6,PodSandboxId:408179907c6bb1aece54956913d8f54563fedc0f2b44d5e359a15076d05af8ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725147493591878680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44d1d76ee638961a3d7102d05c433e3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e,PodSandboxId:91e9dad17224b2858ef0fd3723db997e56b360ac9b01580f10b540158d246115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725147493582939039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-945775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 444cd2379ec783962ea76609a289a609,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b20df2cf-fe99-4091-99d5-1de60924f233 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	30e5c67bfd23c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   10 seconds ago       Running             kube-controller-manager   17                  71752829e2154       kube-controller-manager-pause-945775
	a4aee5dd4e268       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   16                  71752829e2154       kube-controller-manager-pause-945775
	ed1cca658c62d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   4 minutes ago        Running             etcd                      4                   408179907c6bb       etcd-pause-945775
	adbb6b2285184       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   4 minutes ago        Running             kube-scheduler            4                   91e9dad17224b       kube-scheduler-pause-945775
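[Editor's note] The table above lists running etcd, kube-scheduler, and kube-controller-manager containers (the controller-manager already on restart attempt 17), but no kube-apiserver container, even though the CRI-O ListPodSandbox response earlier shows a ready kube-apiserver-pause-945775 sandbox. That matches every "connection refused" against 192.168.83.125:8443 and localhost:8443 in these logs. A small sketch, assuming it is run on the node, for checking whether an apiserver container was ever created and pulling its logs; CONTAINERID is a placeholder:

  sudo crictl ps -a --name kube-apiserver     # any created or exited apiserver containers?
  sudo crictl logs --tail 100 CONTAINERID     # inspect the most recent attempt, if one exists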
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
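[Editor's note] The describe-nodes failure here is the same symptom: nothing is serving on the apiserver port, so kubectl against localhost:8443 is refused. A quick sketch, assuming shell access on the node, to confirm whether anything is bound to 8443 and what the health endpoint returns:

  sudo ss -tlnp | grep 8443                            # is any process listening on the apiserver port?
  sudo curl -sk https://localhost:8443/healthz ; echo   # expected to fail while the apiserver is down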
	
	
	==> dmesg <==
	[  +0.058834] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.173340] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.150655] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.308830] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.143276] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[Aug31 23:28] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.068795] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.000196] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.116048] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.797720] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.329636] kauditd_printk_skb: 46 callbacks suppressed
	[ +14.293704] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.979321] systemd-fstab-generator[2414]: Ignoring "noauto" option for root device
	[  +0.217595] systemd-fstab-generator[2439]: Ignoring "noauto" option for root device
	[  +0.256312] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +0.179199] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.319718] systemd-fstab-generator[2543]: Ignoring "noauto" option for root device
	[Aug31 23:30] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.087031] kauditd_printk_skb: 175 callbacks suppressed
	[  +2.218658] systemd-fstab-generator[2891]: Ignoring "noauto" option for root device
	[ +12.473810] kauditd_printk_skb: 77 callbacks suppressed
	[Aug31 23:34] systemd-fstab-generator[8928]: Ignoring "noauto" option for root device
	[ +12.583359] kauditd_printk_skb: 71 callbacks suppressed
	[Aug31 23:38] systemd-fstab-generator[9639]: Ignoring "noauto" option for root device
	[ +13.646611] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> etcd [ed1cca658c62d6edb62025b0a03b672a61672dbb1fd8ee11fba5504b41478aa6] <==
	{"level":"info","ts":"2024-08-31T23:38:13.929012Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-31T23:38:13.929307Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e54b1639a858b2a4","initial-advertise-peer-urls":["https://192.168.83.125:2380"],"listen-peer-urls":["https://192.168.83.125:2380"],"advertise-client-urls":["https://192.168.83.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T23:38:13.929349Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T23:38:13.929385Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.83.125:2380"}
	{"level":"info","ts":"2024-08-31T23:38:13.929448Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.83.125:2380"}
	{"level":"info","ts":"2024-08-31T23:38:14.641561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e54b1639a858b2a4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T23:38:14.641822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e54b1639a858b2a4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T23:38:14.641886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e54b1639a858b2a4 received MsgPreVoteResp from e54b1639a858b2a4 at term 1"}
	{"level":"info","ts":"2024-08-31T23:38:14.641933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e54b1639a858b2a4 became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T23:38:14.641998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e54b1639a858b2a4 received MsgVoteResp from e54b1639a858b2a4 at term 2"}
	{"level":"info","ts":"2024-08-31T23:38:14.642026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e54b1639a858b2a4 became leader at term 2"}
	{"level":"info","ts":"2024-08-31T23:38:14.642124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e54b1639a858b2a4 elected leader e54b1639a858b2a4 at term 2"}
	{"level":"info","ts":"2024-08-31T23:38:14.644908Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:38:14.645722Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e54b1639a858b2a4","local-member-attributes":"{Name:pause-945775 ClientURLs:[https://192.168.83.125:2379]}","request-path":"/0/members/e54b1639a858b2a4/attributes","cluster-id":"fd2e26e5161fe179","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T23:38:14.645965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:38:14.646308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T23:38:14.646492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T23:38:14.647889Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T23:38:14.647324Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:38:14.649505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T23:38:14.647366Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fd2e26e5161fe179","local-member-id":"e54b1639a858b2a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:38:14.650020Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:38:14.650119Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T23:38:14.653716Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T23:38:14.655931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.125:2379"}
	
	
	==> kernel <==
	 23:42:14 up 14 min,  0 users,  load average: 0.09, 0.15, 0.13
	Linux pause-945775 5.10.207 #1 SMP Wed Aug 28 20:54:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-controller-manager [30e5c67bfd23c03fd7f736da1a0522682586f8a67542d258a5ff7c84092b4754] <==
	I0831 23:42:04.587854       1 serving.go:386] Generated self-signed cert in-memory
	I0831 23:42:04.815137       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 23:42:04.815178       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:42:04.817023       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 23:42:04.817151       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 23:42:04.817579       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 23:42:04.817663       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 23:42:14.819937       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.125:8443/healthz\": dial tcp 192.168.83.125:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783] <==
	I0831 23:40:28.721260       1 serving.go:386] Generated self-signed cert in-memory
	I0831 23:40:28.935929       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 23:40:28.936010       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:40:28.937683       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 23:40:28.937832       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 23:40:28.937915       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 23:40:28.937992       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 23:40:38.939422       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.125:8443/healthz\": dial tcp 192.168.83.125:8443: connect: connection refused"
	
	
	==> kube-scheduler [adbb6b228518497c5775d5cf23ff943cbc44564f526c4432319d7c012b35411e] <==
	E0831 23:41:26.111000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.83.125:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:38.799242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.83.125:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:38.799326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.83.125:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:39.420760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.83.125:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:39.420945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.83.125:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:47.409669       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.83.125:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:47.409859       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.83.125:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:48.235484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.83.125:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:48.235564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.83.125:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:48.661113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.125:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:48.661192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.83.125:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:51.455792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.83.125:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:51.455856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.83.125:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:53.482433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.83.125:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:53.482503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.83.125:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:55.167990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.83.125:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:55.168034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.83.125:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:41:57.444545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.83.125:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:41:57.444746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.83.125:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:42:01.367319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.83.125:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:42:01.367395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.83.125:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:42:03.370066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.125:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:42:03.370149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.125:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	W0831 23:42:06.908660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.83.125:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.125:8443: connect: connection refused
	E0831 23:42:06.908801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.83.125:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 31 23:41:59 pause-945775 kubelet[9646]: E0831 23:41:59.022050    9646 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-945775&limit=500&resourceVersion=0\": dial tcp 192.168.83.125:8443: connect: connection refused" logger="UnhandledError"
	Aug 31 23:42:01 pause-945775 kubelet[9646]: E0831 23:42:01.944737    9646 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-pause-945775_kube-system_4fd381b51d44e1693e8f98ce1aee6f05_1\" is already in use by cb5b210296d2e6bddcb243608f9115d278ed27197487d112cae21420230754e3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="37821e1db8db2b8b69a7502a72392d98a2da91a69991a990153ce6334c659529"
	Aug 31 23:42:01 pause-945775 kubelet[9646]: E0831 23:42:01.944903    9646 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.31.0,Command:[kube-apiserver --advertise-address=192.168.83.125 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-typ
es=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMo
unts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.83.125,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8443 },Host:192.168.83.125,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSe
conds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.83.125,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-pause-945775_kube-system(4fd381b51d44e1693e8f98ce1aee6f05): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-pause-945775_kube-system_4fd381b51d44e1693e8f98ce1aee6f05_1\" is already in use by cb5b210296d2e6bddcb243608f9115d278ed27197487d112cae21420230754e3. You h
ave to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 31 23:42:01 pause-945775 kubelet[9646]: E0831 23:42:01.946109    9646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-pause-945775_kube-system_4fd381b51d44e1693e8f98ce1aee6f05_1\\\" is already in use by cb5b210296d2e6bddcb243608f9115d278ed27197487d112cae21420230754e3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-pause-945775" podUID="4fd381b51d44e1693e8f98ce1aee6f05"
	Aug 31 23:42:02 pause-945775 kubelet[9646]: E0831 23:42:02.592592    9646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-945775?timeout=10s\": dial tcp 192.168.83.125:8443: connect: connection refused" interval="7s"
	Aug 31 23:42:02 pause-945775 kubelet[9646]: I0831 23:42:02.787322    9646 kubelet_node_status.go:72] "Attempting to register node" node="pause-945775"
	Aug 31 23:42:02 pause-945775 kubelet[9646]: E0831 23:42:02.788528    9646 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.125:8443: connect: connection refused" node="pause-945775"
	Aug 31 23:42:03 pause-945775 kubelet[9646]: E0831 23:42:03.011241    9646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147723011007657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:42:03 pause-945775 kubelet[9646]: E0831 23:42:03.011289    9646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147723011007657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:42:03 pause-945775 kubelet[9646]: E0831 23:42:03.564982    9646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.83.125:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-945775.17f0f4812e4f2c75  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-945775,UID:pause-945775,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node pause-945775 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:pause-945775,},FirstTimestamp:2024-08-31 23:38:12.947799157 +0000 UTC m=+0.467769821,LastTimestamp:2024-08-31 23:38:12.947799157 +0000 UTC m=+0.467769821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-945775,}"
	Aug 31 23:42:03 pause-945775 kubelet[9646]: I0831 23:42:03.937049    9646 scope.go:117] "RemoveContainer" containerID="a4aee5dd4e268cf73d5d181851df407a1f92c3a1d84aa2db4c466f7b4dce9783"
	Aug 31 23:42:09 pause-945775 kubelet[9646]: E0831 23:42:09.594094    9646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-945775?timeout=10s\": dial tcp 192.168.83.125:8443: connect: connection refused" interval="7s"
	Aug 31 23:42:09 pause-945775 kubelet[9646]: I0831 23:42:09.790866    9646 kubelet_node_status.go:72] "Attempting to register node" node="pause-945775"
	Aug 31 23:42:09 pause-945775 kubelet[9646]: E0831 23:42:09.791842    9646 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.125:8443: connect: connection refused" node="pause-945775"
	Aug 31 23:42:12 pause-945775 kubelet[9646]: E0831 23:42:12.952854    9646 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 23:42:12 pause-945775 kubelet[9646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 23:42:12 pause-945775 kubelet[9646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 23:42:12 pause-945775 kubelet[9646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 23:42:12 pause-945775 kubelet[9646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 23:42:13 pause-945775 kubelet[9646]: E0831 23:42:13.013276    9646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147733012939275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:42:13 pause-945775 kubelet[9646]: E0831 23:42:13.013327    9646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725147733012939275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:42:13 pause-945775 kubelet[9646]: E0831 23:42:13.566736    9646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.83.125:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-945775.17f0f4812e4f2c75  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-945775,UID:pause-945775,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node pause-945775 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:pause-945775,},FirstTimestamp:2024-08-31 23:38:12.947799157 +0000 UTC m=+0.467769821,LastTimestamp:2024-08-31 23:38:12.947799157 +0000 UTC m=+0.467769821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-945775,}"
	Aug 31 23:42:13 pause-945775 kubelet[9646]: E0831 23:42:13.952242    9646 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-pause-945775_kube-system_4fd381b51d44e1693e8f98ce1aee6f05_1\" is already in use by cb5b210296d2e6bddcb243608f9115d278ed27197487d112cae21420230754e3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="37821e1db8db2b8b69a7502a72392d98a2da91a69991a990153ce6334c659529"
	Aug 31 23:42:13 pause-945775 kubelet[9646]: E0831 23:42:13.952401    9646 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.31.0,Command:[kube-apiserver --advertise-address=192.168.83.125 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-typ
es=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMo
unts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.83.125,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8443 },Host:192.168.83.125,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSe
conds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.83.125,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-pause-945775_kube-system(4fd381b51d44e1693e8f98ce1aee6f05): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-pause-945775_kube-system_4fd381b51d44e1693e8f98ce1aee6f05_1\" is already in use by cb5b210296d2e6bddcb243608f9115d278ed27197487d112cae21420230754e3. You h
ave to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 31 23:42:13 pause-945775 kubelet[9646]: E0831 23:42:13.953703    9646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-pause-945775_kube-system_4fd381b51d44e1693e8f98ce1aee6f05_1\\\" is already in use by cb5b210296d2e6bddcb243608f9115d278ed27197487d112cae21420230754e3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-pause-945775" podUID="4fd381b51d44e1693e8f98ce1aee6f05"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-945775 -n pause-945775
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-945775 -n pause-945775: exit status 2 (220.094599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "pause-945775" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (836.84s)
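
The scheduler and kubelet entries above all fail the same way: every request to https://192.168.83.125:8443 gets "connect: connection refused", because the container runtime (cri-o) still holds the kube-apiserver container name from the previous run (the CreateContainerError entries) and the replacement apiserver never starts. As a minimal illustration, the hypothetical Go sketch below polls the same /readyz endpoint that the pod's readiness probe uses; the address and path are taken from the log above, and the snippet is not part of the minikube test suite.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The bootstrap certificate is not trusted from outside the cluster,
		// so verification is skipped in this throwaway probe.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 30; attempt++ {
			resp, err := client.Get("https://192.168.83.125:8443/readyz")
			if err != nil {
				// Matches the repeated "connect: connection refused" lines above.
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return
		}
		fmt.Println("apiserver never became reachable")
	}

Against the cluster captured above, every attempt would print the same connection-refused error until the stale kube-apiserver container is removed and the pod can start again.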

                                                
                                    

Test pass (220/270)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 31.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 15.15
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.13
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 91.08
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 202.03
31 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/parallel/InspektorGadget 12.11
37 TestAddons/parallel/HelmTiller 10.99
39 TestAddons/parallel/CSI 52.26
40 TestAddons/parallel/Headlamp 14.16
41 TestAddons/parallel/CloudSpanner 6.73
42 TestAddons/parallel/LocalPath 55.73
43 TestAddons/parallel/NvidiaDevicePlugin 6.47
44 TestAddons/parallel/Yakd 10.97
45 TestAddons/StoppedEnableDisable 7.55
46 TestCertOptions 90.71
47 TestCertExpiration 277.75
49 TestForceSystemdFlag 78.26
50 TestForceSystemdEnv 55.5
52 TestKVMDriverInstallOrUpdate 5.36
56 TestErrorSpam/setup 45.4
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.59
60 TestErrorSpam/unpause 1.7
61 TestErrorSpam/stop 5.61
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 55.64
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 45.71
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.39
73 TestFunctional/serial/CacheCmd/cache/add_local 2.28
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 34.31
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.38
84 TestFunctional/serial/LogsFileCmd 1.39
85 TestFunctional/serial/InvalidService 4.81
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 42.87
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.99
95 TestFunctional/parallel/ServiceCmdConnect 11.49
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 52.42
99 TestFunctional/parallel/SSHCmd 0.38
100 TestFunctional/parallel/CpCmd 1.26
101 TestFunctional/parallel/MySQL 30.71
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.27
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
111 TestFunctional/parallel/License 0.64
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.48
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
118 TestFunctional/parallel/ImageCommands/ImageBuild 5.65
119 TestFunctional/parallel/ImageCommands/Setup 1.99
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
123 TestFunctional/parallel/ServiceCmd/DeployApp 11.24
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
140 TestFunctional/parallel/ServiceCmd/List 0.41
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
143 TestFunctional/parallel/ServiceCmd/Format 0.27
144 TestFunctional/parallel/ServiceCmd/URL 0.43
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
146 TestFunctional/parallel/ProfileCmd/profile_list 0.36
147 TestFunctional/parallel/MountCmd/any-port 28.7
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
149 TestFunctional/parallel/MountCmd/specific-port 1.69
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 202.51
158 TestMultiControlPlane/serial/DeployApp 7.78
159 TestMultiControlPlane/serial/PingHostFromPods 1.17
160 TestMultiControlPlane/serial/AddWorkerNode 55.4
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
163 TestMultiControlPlane/serial/CopyFile 12.31
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 225.54
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 77.55
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
179 TestJSONOutput/start/Command 53.96
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.7
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.6
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.35
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 90.99
211 TestMountStart/serial/StartWithMountFirst 27.86
212 TestMountStart/serial/VerifyMountFirst 0.35
213 TestMountStart/serial/StartWithMountSecond 25.06
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 23.57
219 TestMountStart/serial/VerifyMountPostStop 0.37
223 TestMultiNode/serial/FreshStart2Nodes 115.76
224 TestMultiNode/serial/DeployApp2Nodes 5.88
225 TestMultiNode/serial/PingHostFrom2Pods 0.77
226 TestMultiNode/serial/AddNode 53.26
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.21
229 TestMultiNode/serial/CopyFile 7.01
230 TestMultiNode/serial/StopNode 2.31
231 TestMultiNode/serial/StartAfterStop 39.91
233 TestMultiNode/serial/DeleteNode 2.01
235 TestMultiNode/serial/RestartMultiNode 179.52
236 TestMultiNode/serial/ValidateNameConflict 43.09
243 TestScheduledStopUnix 113.72
247 TestRunningBinaryUpgrade 230.12
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
253 TestNoKubernetes/serial/StartWithK8s 101.98
261 TestNetworkPlugins/group/false 6
265 TestStoppedBinaryUpgrade/Setup 2.66
266 TestStoppedBinaryUpgrade/Upgrade 140.67
267 TestNoKubernetes/serial/StartWithStopK8s 62.26
268 TestNoKubernetes/serial/Start 27.73
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
270 TestNoKubernetes/serial/ProfileList 31.99
271 TestNoKubernetes/serial/Stop 1.34
272 TestNoKubernetes/serial/StartNoArgs 22.24
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
283 TestPause/serial/Start 53.68
284 TestNetworkPlugins/group/auto/Start 64.11
286 TestNetworkPlugins/group/auto/KubeletFlags 0.2
287 TestNetworkPlugins/group/auto/NetCatPod 12.25
288 TestNetworkPlugins/group/auto/DNS 0.2
289 TestNetworkPlugins/group/auto/Localhost 0.13
290 TestNetworkPlugins/group/auto/HairPin 0.14
291 TestNetworkPlugins/group/kindnet/Start 64.9
292 TestNetworkPlugins/group/calico/Start 100.91
293 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
295 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
296 TestNetworkPlugins/group/custom-flannel/Start 78.86
297 TestNetworkPlugins/group/kindnet/DNS 0.2
298 TestNetworkPlugins/group/kindnet/Localhost 0.13
299 TestNetworkPlugins/group/kindnet/HairPin 0.14
300 TestNetworkPlugins/group/enable-default-cni/Start 84.52
301 TestNetworkPlugins/group/calico/ControllerPod 6.03
302 TestNetworkPlugins/group/calico/KubeletFlags 0.31
303 TestNetworkPlugins/group/calico/NetCatPod 11.4
304 TestNetworkPlugins/group/calico/DNS 0.21
305 TestNetworkPlugins/group/calico/Localhost 0.16
306 TestNetworkPlugins/group/calico/HairPin 0.16
307 TestNetworkPlugins/group/flannel/Start 71.21
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
310 TestNetworkPlugins/group/custom-flannel/DNS 0.2
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.34
315 TestNetworkPlugins/group/bridge/Start 87.23
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
323 TestNetworkPlugins/group/flannel/NetCatPod 11.45
324 TestNetworkPlugins/group/flannel/DNS 0.16
325 TestNetworkPlugins/group/flannel/Localhost 0.14
326 TestNetworkPlugins/group/flannel/HairPin 0.12
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
330 TestNetworkPlugins/group/bridge/NetCatPod 11.25
331 TestNetworkPlugins/group/bridge/DNS 0.19
332 TestNetworkPlugins/group/bridge/Localhost 0.15
333 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (31.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-160287 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-160287 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (31.571755063s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (31.57s)
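
This test drives minikube with -o=json, which makes it emit progress as a stream of JSON events on stdout instead of the usual text output. A small hypothetical consumer of that stream is sketched below; it assumes one JSON object per line with a string "type" field (CloudEvents-style output), and it is not part of the test suite.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore anything that is not a JSON object
			}
			if t, ok := ev["type"].(string); ok {
				fmt.Println(t)
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read error:", err)
		}
	}

Piping the start command's stdout into a helper like this would print one event type per line.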

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-160287
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-160287: exit status 85 (57.573873ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |          |
	|         | -p download-only-160287        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:05:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:05:49.244905   20381 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:05:49.245172   20381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:49.245182   20381 out.go:358] Setting ErrFile to fd 2...
	I0831 22:05:49.245186   20381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:49.245425   20381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	W0831 22:05:49.245582   20381 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-13149/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-13149/.minikube/config/config.json: no such file or directory
	I0831 22:05:49.246231   20381 out.go:352] Setting JSON to true
	I0831 22:05:49.247131   20381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2896,"bootTime":1725139053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:05:49.247189   20381 start.go:139] virtualization: kvm guest
	I0831 22:05:49.249646   20381 out.go:97] [download-only-160287] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:05:49.249760   20381 notify.go:220] Checking for updates...
	W0831 22:05:49.249785   20381 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:05:49.251136   20381 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:05:49.252431   20381 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:05:49.253864   20381 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:05:49.255186   20381 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:05:49.256395   20381 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0831 22:05:49.258631   20381 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:05:49.258885   20381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:05:49.357519   20381 out.go:97] Using the kvm2 driver based on user configuration
	I0831 22:05:49.357551   20381 start.go:297] selected driver: kvm2
	I0831 22:05:49.357558   20381 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:05:49.358010   20381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:05:49.358149   20381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:05:49.373086   20381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:05:49.373158   20381 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:05:49.373645   20381 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0831 22:05:49.373824   20381 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:05:49.373887   20381 cni.go:84] Creating CNI manager for ""
	I0831 22:05:49.373899   20381 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:05:49.373907   20381 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:05:49.373963   20381 start.go:340] cluster config:
	{Name:download-only-160287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-160287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:05:49.374136   20381 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:05:49.376077   20381 out.go:97] Downloading VM boot image ...
	I0831 22:05:49.376126   20381 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/iso/amd64/minikube-v1.33.1-1724862017-19530-amd64.iso
	I0831 22:06:03.758212   20381 out.go:97] Starting "download-only-160287" primary control-plane node in "download-only-160287" cluster
	I0831 22:06:03.758236   20381 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 22:06:03.871635   20381 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:03.871675   20381 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:03.871828   20381 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 22:06:03.873852   20381 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 22:06:03.873864   20381 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0831 22:06:04.009254   20381 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:17.561024   20381 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0831 22:06:17.561134   20381 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0831 22:06:18.467007   20381 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0831 22:06:18.467374   20381 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/download-only-160287/config.json ...
	I0831 22:06:18.467404   20381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/download-only-160287/config.json: {Name:mk55008c3cd652486a4c32a1ac5654a78ebab389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:18.467553   20381 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 22:06:18.467720   20381 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-160287 host does not exist
	  To start a cluster, run: "minikube start -p download-only-160287"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
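
The "Last Start" log above downloads the preload tarball with an md5 digest embedded in the URL (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19) and then saves and verifies that checksum (preload.go:247 and preload.go:254). Below is a minimal sketch of that verification step, not minikube's actual implementation; the cache path and digest are copied from the log.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Path and expected digest copied from the log above.
		const path = "/home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
		const want = "f93b07cde9c3289306cbaeb7a1803c19"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload checksum OK")
	}
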

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-160287
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (15.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-777221 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-777221 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.144907705s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (15.15s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-777221
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-777221: exit status 85 (133.89501ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-160287        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| delete  | -p download-only-160287        | download-only-160287 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| start   | -o=json --download-only        | download-only-777221 | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | -p download-only-777221        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:21.137115   20655 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:21.137257   20655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:21.137268   20655 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:21.137275   20655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:21.137456   20655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:06:21.138049   20655 out.go:352] Setting JSON to true
	I0831 22:06:21.138913   20655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2928,"bootTime":1725139053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:21.138971   20655 start.go:139] virtualization: kvm guest
	I0831 22:06:21.141196   20655 out.go:97] [download-only-777221] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:06:21.141378   20655 notify.go:220] Checking for updates...
	I0831 22:06:21.142945   20655 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:06:21.144523   20655 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:21.145977   20655 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:06:21.147546   20655 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:06:21.149050   20655 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0831 22:06:21.151640   20655 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:06:21.151881   20655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:21.184580   20655 out.go:97] Using the kvm2 driver based on user configuration
	I0831 22:06:21.184622   20655 start.go:297] selected driver: kvm2
	I0831 22:06:21.184631   20655 start.go:901] validating driver "kvm2" against <nil>
	I0831 22:06:21.184952   20655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:21.185056   20655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18943-13149/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0831 22:06:21.200418   20655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0831 22:06:21.200470   20655 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:21.200921   20655 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0831 22:06:21.201087   20655 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:06:21.201177   20655 cni.go:84] Creating CNI manager for ""
	I0831 22:06:21.201192   20655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0831 22:06:21.201204   20655 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:21.201308   20655 start.go:340] cluster config:
	{Name:download-only-777221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-777221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:21.201428   20655 iso.go:125] acquiring lock: {Name:mk8e8d759e9a58ffaa0f141d41ab761a29ec73f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:21.203211   20655 out.go:97] Starting "download-only-777221" primary control-plane node in "download-only-777221" cluster
	I0831 22:06:21.203236   20655 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:06:21.795194   20655 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:21.795233   20655 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:21.795430   20655 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:06:21.797067   20655 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0831 22:06:21.797083   20655 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0831 22:06:21.919669   20655 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0831 22:06:34.504893   20655 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0831 22:06:34.504983   20655 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-13149/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0831 22:06:35.242733   20655 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:06:35.243066   20655 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/download-only-777221/config.json ...
	I0831 22:06:35.243097   20655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/download-only-777221/config.json: {Name:mkbc8bebf86aa384ce3494cf4897c2514d66dbde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:35.243251   20655 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:06:35.243394   20655 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-13149/.minikube/cache/linux/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-777221 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777221"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-777221
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-465268 --alsologtostderr --binary-mirror http://127.0.0.1:45273 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-465268" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-465268
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (91.08s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-651504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-651504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.36814216s)
helpers_test.go:176: Cleaning up "offline-crio-651504" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-651504
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-651504: (1.709970927s)
--- PASS: TestOffline (91.08s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-132210
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-132210: exit status 85 (48.464094ms)

                                                
                                                
-- stdout --
	* Profile "addons-132210" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-132210"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-132210
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-132210: exit status 85 (47.211113ms)

                                                
                                                
-- stdout --
	* Profile "addons-132210" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-132210"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (202.03s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-132210 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-132210 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m22.030795366s)
--- PASS: TestAddons/Setup (202.03s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-132210 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-132210 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-prj4s" [7093af89-3599-4817-9ce3-a552e8b7f61b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004805544s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-132210
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-132210: (6.09928443s)
--- PASS: TestAddons/parallel/InspektorGadget (12.11s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.99s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.394917ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:345: "tiller-deploy-b48cc5f79-lljvg" [d3d10da4-8063-4e9f-a3a6-d02d24b61855] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004324469s
addons_test.go:475: (dbg) Run:  kubectl --context addons-132210 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-132210 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.436058024s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.99s)

                                                
                                    
TestAddons/parallel/CSI (52.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.32258ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-132210 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-132210 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [018bd545-1a36-43c0-9e72-ad3dd1c3720a] Pending
helpers_test.go:345: "task-pv-pod" [018bd545-1a36-43c0-9e72-ad3dd1c3720a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [018bd545-1a36-43c0-9e72-ad3dd1c3720a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003681159s
addons_test.go:590: (dbg) Run:  kubectl --context addons-132210 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context addons-132210 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context addons-132210 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-132210 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-132210 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-132210 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-132210 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [35008dd3-2b78-4080-bfee-529bc9096c15] Pending
helpers_test.go:345: "task-pv-pod-restore" [35008dd3-2b78-4080-bfee-529bc9096c15] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [35008dd3-2b78-4080-bfee-529bc9096c15] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003873834s
addons_test.go:632: (dbg) Run:  kubectl --context addons-132210 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-132210 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-132210 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.746814969s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.26s)

                                                
                                    
TestAddons/parallel/Headlamp (14.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-132210 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-zb4l7" [ebe68c93-bd00-4fed-bf1c-dbf120b29acd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-zb4l7" [ebe68c93-bd00-4fed-bf1c-dbf120b29acd] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003968458s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (14.16s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-v8q5p" [30d973a7-1840-4e94-8936-8ff2a7e89cad] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003704142s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-132210
--- PASS: TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                    
TestAddons/parallel/LocalPath (55.73s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-132210 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-132210 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:345: "test-local-path" [6593e5e6-5f38-41a7-947b-800fd9b9dcdb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "test-local-path" [6593e5e6-5f38-41a7-947b-800fd9b9dcdb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "test-local-path" [6593e5e6-5f38-41a7-947b-800fd9b9dcdb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004645312s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-132210 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 ssh "cat /opt/local-path-provisioner/pvc-4b3d56ec-b617-42e5-a22c-ca5c5d7808cd_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-132210 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-132210 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.878674202s)
--- PASS: TestAddons/parallel/LocalPath (55.73s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-99v85" [54398aec-2cfe-4328-a845-e1bd4bbfc99f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004156652s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-132210
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                    
TestAddons/parallel/Yakd (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-vrmgb" [f15adb8e-7ef9-4f26-9a2b-44e0d4f7cfb5] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007509203s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-132210 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-132210 addons disable yakd --alsologtostderr -v=1: (5.963766063s)
--- PASS: TestAddons/parallel/Yakd (10.97s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.55s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-132210
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-132210: (7.283782554s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-132210
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-132210
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-132210
--- PASS: TestAddons/StoppedEnableDisable (7.55s)

                                                
                                    
TestCertOptions (90.71s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-251432 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0831 23:26:42.546824   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-251432 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m29.4913618s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-251432 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-251432 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-251432 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-251432" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-251432
--- PASS: TestCertOptions (90.71s)

                                                
                                    
TestCertExpiration (277.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-678368 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-678368 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.014829493s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-678368 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0831 23:29:59.874631   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-678368 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.637204966s)
helpers_test.go:176: Cleaning up "cert-expiration-678368" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-678368
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-678368: (1.091670813s)
--- PASS: TestCertExpiration (277.75s)

                                                
                                    
TestForceSystemdFlag (78.26s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-260827 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-260827 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.264926544s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-260827 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-260827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-260827
--- PASS: TestForceSystemdFlag (78.26s)

                                                
                                    
TestForceSystemdEnv (55.5s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-828173 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-828173 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.713857372s)
helpers_test.go:176: Cleaning up "force-systemd-env-828173" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-828173
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-828173: (10.790058875s)
--- PASS: TestForceSystemdEnv (55.50s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.36s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.36s)

                                                
                                    
TestErrorSpam/setup (45.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-655004 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-655004 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-655004 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-655004 --driver=kvm2  --container-runtime=crio: (45.404376221s)
--- PASS: TestErrorSpam/setup (45.40s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (5.61s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 stop: (2.296972341s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 stop: (1.945267469s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-655004 --log_dir /tmp/nospam-655004 stop: (1.369674748s)
--- PASS: TestErrorSpam/stop (5.61s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18943-13149/.minikube/files/etc/test/nested/copy/20369/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882363 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0831 22:24:59.875117   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:59.882139   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:59.893459   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:59.914834   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:59.956284   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:00.037774   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:00.199388   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:00.521089   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:01.163193   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:02.444813   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:05.006978   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-882363 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.643838933s)
--- PASS: TestFunctional/serial/StartWithProxy (55.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (45.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882363 --alsologtostderr -v=8
E0831 22:25:10.129308   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:20.371669   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:25:40.853714   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-882363 --alsologtostderr -v=8: (45.709930189s)
functional_test.go:663: soft start took 45.710549583s for "functional-882363" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-882363 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 cache add registry.k8s.io/pause:3.1: (1.037795314s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 cache add registry.k8s.io/pause:3.3: (1.236939492s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 cache add registry.k8s.io/pause:latest: (1.113533272s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-882363 /tmp/TestFunctionalserialCacheCmdcacheadd_local3709303254/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cache add minikube-local-cache-test:functional-882363
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 cache add minikube-local-cache-test:functional-882363: (1.947006579s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cache delete minikube-local-cache-test:functional-882363
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-882363
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.819281ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 kubectl -- --context functional-882363 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-882363 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882363 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0831 22:26:21.815639   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-882363 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.307934518s)
functional_test.go:761: restart took 34.3080463s for "functional-882363" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.31s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-882363 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 logs: (1.381485965s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 logs --file /tmp/TestFunctionalserialLogsFileCmd902479994/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 logs --file /tmp/TestFunctionalserialLogsFileCmd902479994/001/logs.txt: (1.389135945s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (4.81s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-882363 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-882363
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-882363: exit status 115 (263.53942ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.142:32481 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-882363 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-882363 delete -f testdata/invalidsvc.yaml: (1.350651114s)
--- PASS: TestFunctional/serial/InvalidService (4.81s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 config get cpus: exit status 14 (48.613894ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 config get cpus: exit status 14 (48.198705ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (42.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882363 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882363 --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 31334: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (42.87s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.862609ms)

                                                
                                                
-- stdout --
	* [functional-882363] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:26:56.485306   31121 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:26:56.485556   31121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:56.485564   31121 out.go:358] Setting ErrFile to fd 2...
	I0831 22:26:56.485569   31121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:56.485759   31121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:26:56.486266   31121 out.go:352] Setting JSON to false
	I0831 22:26:56.487169   31121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4163,"bootTime":1725139053,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:26:56.487220   31121 start.go:139] virtualization: kvm guest
	I0831 22:26:56.489249   31121 out.go:177] * [functional-882363] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 22:26:56.490565   31121 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:26:56.490559   31121 notify.go:220] Checking for updates...
	I0831 22:26:56.491908   31121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:26:56.493205   31121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:26:56.494558   31121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:26:56.495760   31121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:26:56.496994   31121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:26:56.498630   31121 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:26:56.498994   31121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:26:56.499057   31121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:26:56.514506   31121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I0831 22:26:56.514867   31121 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:26:56.515395   31121 main.go:141] libmachine: Using API Version  1
	I0831 22:26:56.515415   31121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:26:56.515887   31121 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:26:56.516110   31121 main.go:141] libmachine: (functional-882363) Calling .DriverName
	I0831 22:26:56.516342   31121 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:26:56.516663   31121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:26:56.516702   31121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:26:56.531518   31121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0831 22:26:56.531870   31121 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:26:56.532314   31121 main.go:141] libmachine: Using API Version  1
	I0831 22:26:56.532345   31121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:26:56.532660   31121 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:26:56.532865   31121 main.go:141] libmachine: (functional-882363) Calling .DriverName
	I0831 22:26:56.565600   31121 out.go:177] * Using the kvm2 driver based on existing profile
	I0831 22:26:56.566797   31121 start.go:297] selected driver: kvm2
	I0831 22:26:56.566816   31121 start.go:901] validating driver "kvm2" against &{Name:functional-882363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-882363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:26:56.566958   31121 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:26:56.569296   31121 out.go:201] 
	W0831 22:26:56.570581   31121 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:26:56.571888   31121 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882363 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.339345ms)

                                                
                                                
-- stdout --
	* [functional-882363] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:26:56.749595   31176 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:26:56.749697   31176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:56.749706   31176 out.go:358] Setting ErrFile to fd 2...
	I0831 22:26:56.749710   31176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:56.749969   31176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 22:26:56.750459   31176 out.go:352] Setting JSON to false
	I0831 22:26:56.751337   31176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4164,"bootTime":1725139053,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:26:56.751404   31176 start.go:139] virtualization: kvm guest
	I0831 22:26:56.753591   31176 out.go:177] * [functional-882363] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0831 22:26:56.754854   31176 notify.go:220] Checking for updates...
	I0831 22:26:56.754873   31176 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:26:56.756034   31176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:26:56.757137   31176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 22:26:56.758348   31176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 22:26:56.759449   31176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:26:56.760592   31176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:26:56.762244   31176 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:26:56.762901   31176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:26:56.762949   31176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:26:56.779864   31176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0831 22:26:56.780260   31176 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:26:56.780780   31176 main.go:141] libmachine: Using API Version  1
	I0831 22:26:56.780804   31176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:26:56.781199   31176 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:26:56.781419   31176 main.go:141] libmachine: (functional-882363) Calling .DriverName
	I0831 22:26:56.781651   31176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:26:56.781931   31176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 22:26:56.781966   31176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 22:26:56.797462   31176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46537
	I0831 22:26:56.797838   31176 main.go:141] libmachine: () Calling .GetVersion
	I0831 22:26:56.798265   31176 main.go:141] libmachine: Using API Version  1
	I0831 22:26:56.798284   31176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 22:26:56.798588   31176 main.go:141] libmachine: () Calling .GetMachineName
	I0831 22:26:56.798749   31176 main.go:141] libmachine: (functional-882363) Calling .DriverName
	I0831 22:26:56.830441   31176 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0831 22:26:56.831672   31176 start.go:297] selected driver: kvm2
	I0831 22:26:56.831685   31176 start.go:901] validating driver "kvm2" against &{Name:functional-882363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-882363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:26:56.831774   31176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:26:56.833682   31176 out.go:201] 
	W0831 22:26:56.834737   31176 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0831 22:26:56.835821   31176 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-882363 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-882363 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-67bdd5bbb4-pvclt" [b7b11783-4c19-48de-92a1-fb849a6e1b20] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:345: "hello-node-connect-67bdd5bbb4-pvclt" [b7b11783-4c19-48de-92a1-fb849a6e1b20] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004821106s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.142:30898
functional_test.go:1675: http://192.168.39.142:30898: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-pvclt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.142:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.142:30898
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.49s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (52.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [9004f9f5-46ac-46dd-a07e-f291a148a5f9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004469422s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-882363 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-882363 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-882363 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-882363 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-882363 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [a07c7f6c-a0e0-4502-b57b-1244ef1c9991] Pending
helpers_test.go:345: "sp-pod" [a07c7f6c-a0e0-4502-b57b-1244ef1c9991] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [a07c7f6c-a0e0-4502-b57b-1244ef1c9991] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004641659s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-882363 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-882363 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-882363 delete -f testdata/storage-provisioner/pod.yaml: (1.92680382s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-882363 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [979aa7b1-bcfe-4b14-b864-19cce007bd0e] Pending
helpers_test.go:345: "sp-pod" [979aa7b1-bcfe-4b14-b864-19cce007bd0e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [979aa7b1-bcfe-4b14-b864-19cce007bd0e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004822459s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-882363 exec sp-pod -- ls /tmp/mount
2024/08/31 22:27:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.42s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh -n functional-882363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cp functional-882363:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3086678246/001/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh -n functional-882363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh -n functional-882363 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.26s)

                                                
                                    
TestFunctional/parallel/MySQL (30.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-882363 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:345: "mysql-6cdb49bbb-wn2g8" [a6646625-8401-485d-b833-7d49af5560f2] Pending
helpers_test.go:345: "mysql-6cdb49bbb-wn2g8" [a6646625-8401-485d-b833-7d49af5560f2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:345: "mysql-6cdb49bbb-wn2g8" [a6646625-8401-485d-b833-7d49af5560f2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.004324858s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-882363 exec mysql-6cdb49bbb-wn2g8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-882363 exec mysql-6cdb49bbb-wn2g8 -- mysql -ppassword -e "show databases;": exit status 1 (139.484635ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-882363 exec mysql-6cdb49bbb-wn2g8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.71s)

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20369/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /etc/test/nested/copy/20369/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20369.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /etc/ssl/certs/20369.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20369.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /usr/share/ca-certificates/20369.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/203692.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /etc/ssl/certs/203692.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/203692.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /usr/share/ca-certificates/203692.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-882363 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh "sudo systemctl is-active docker": exit status 1 (211.81906ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh "sudo systemctl is-active containerd": exit status 1 (192.275765ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

                                                
                                    
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882363 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-882363
localhost/kicbase/echo-server:functional-882363
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882363 image ls --format short --alsologtostderr:
I0831 22:27:23.756968   31608 out.go:345] Setting OutFile to fd 1 ...
I0831 22:27:23.757079   31608 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:23.757089   31608 out.go:358] Setting ErrFile to fd 2...
I0831 22:27:23.757095   31608 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:23.757299   31608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
I0831 22:27:23.757848   31608 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:23.757955   31608 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:23.758327   31608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:23.758378   31608 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:23.772833   31608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
I0831 22:27:23.773309   31608 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:23.773903   31608 main.go:141] libmachine: Using API Version  1
I0831 22:27:23.773923   31608 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:23.774279   31608 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:23.774549   31608 main.go:141] libmachine: (functional-882363) Calling .GetState
I0831 22:27:23.776564   31608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:23.776604   31608 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:23.791361   31608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38157
I0831 22:27:23.791789   31608 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:23.792261   31608 main.go:141] libmachine: Using API Version  1
I0831 22:27:23.792289   31608 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:23.792588   31608 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:23.792763   31608 main.go:141] libmachine: (functional-882363) Calling .DriverName
I0831 22:27:23.792967   31608 ssh_runner.go:195] Run: systemctl --version
I0831 22:27:23.792996   31608 main.go:141] libmachine: (functional-882363) Calling .GetSSHHostname
I0831 22:27:23.795888   31608 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:23.796262   31608 main.go:141] libmachine: (functional-882363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:57:fc", ip: ""} in network mk-functional-882363: {Iface:virbr1 ExpiryTime:2024-08-31 23:24:24 +0000 UTC Type:0 Mac:52:54:00:d1:57:fc Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-882363 Clientid:01:52:54:00:d1:57:fc}
I0831 22:27:23.796294   31608 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined IP address 192.168.39.142 and MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:23.796438   31608 main.go:141] libmachine: (functional-882363) Calling .GetSSHPort
I0831 22:27:23.796578   31608 main.go:141] libmachine: (functional-882363) Calling .GetSSHKeyPath
I0831 22:27:23.796698   31608 main.go:141] libmachine: (functional-882363) Calling .GetSSHUsername
I0831 22:27:23.796852   31608 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/functional-882363/id_rsa Username:docker}
I0831 22:27:23.911562   31608 ssh_runner.go:195] Run: sudo crictl images --output json
I0831 22:27:23.974549   31608 main.go:141] libmachine: Making call to close driver server
I0831 22:27:23.974565   31608 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:23.974806   31608 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:23.974824   31608 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:23.974838   31608 main.go:141] libmachine: Making call to close driver server
I0831 22:27:23.974846   31608 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:23.975050   31608 main.go:141] libmachine: (functional-882363) DBG | Closing plugin on server side
I0831 22:27:23.975145   31608 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:23.975191   31608 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882363 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/minikube-local-cache-test     | functional-882363  | 23dfc15562327 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/kicbase/echo-server           | functional-882363  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882363 image ls --format table --alsologtostderr:
I0831 22:27:28.277805   32162 out.go:345] Setting OutFile to fd 1 ...
I0831 22:27:28.278071   32162 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:28.278082   32162 out.go:358] Setting ErrFile to fd 2...
I0831 22:27:28.278086   32162 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:28.278270   32162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
I0831 22:27:28.278795   32162 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:28.278884   32162 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:28.279234   32162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:28.279284   32162 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:28.293476   32162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
I0831 22:27:28.293873   32162 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:28.294472   32162 main.go:141] libmachine: Using API Version  1
I0831 22:27:28.294493   32162 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:28.294770   32162 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:28.294940   32162 main.go:141] libmachine: (functional-882363) Calling .GetState
I0831 22:27:28.296619   32162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:28.296651   32162 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:28.310767   32162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
I0831 22:27:28.311248   32162 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:28.311736   32162 main.go:141] libmachine: Using API Version  1
I0831 22:27:28.311756   32162 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:28.312064   32162 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:28.312224   32162 main.go:141] libmachine: (functional-882363) Calling .DriverName
I0831 22:27:28.312388   32162 ssh_runner.go:195] Run: systemctl --version
I0831 22:27:28.312415   32162 main.go:141] libmachine: (functional-882363) Calling .GetSSHHostname
I0831 22:27:28.315082   32162 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:28.315519   32162 main.go:141] libmachine: (functional-882363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:57:fc", ip: ""} in network mk-functional-882363: {Iface:virbr1 ExpiryTime:2024-08-31 23:24:24 +0000 UTC Type:0 Mac:52:54:00:d1:57:fc Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-882363 Clientid:01:52:54:00:d1:57:fc}
I0831 22:27:28.315555   32162 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined IP address 192.168.39.142 and MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:28.315700   32162 main.go:141] libmachine: (functional-882363) Calling .GetSSHPort
I0831 22:27:28.315849   32162 main.go:141] libmachine: (functional-882363) Calling .GetSSHKeyPath
I0831 22:27:28.315972   32162 main.go:141] libmachine: (functional-882363) Calling .GetSSHUsername
I0831 22:27:28.316152   32162 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/functional-882363/id_rsa Username:docker}
I0831 22:27:28.394097   32162 ssh_runner.go:195] Run: sudo crictl images --output json
I0831 22:27:28.451177   32162 main.go:141] libmachine: Making call to close driver server
I0831 22:27:28.451191   32162 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:28.451456   32162 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:28.451471   32162 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:28.451480   32162 main.go:141] libmachine: Making call to close driver server
I0831 22:27:28.451487   32162 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:28.451719   32162 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:28.451748   32162 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:28.451778   32162 main.go:141] libmachine: (functional-882363) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882363 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d92
8924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kind
netd:v20240730-75a5af0c"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc5
9433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","r
epoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"23dfc155623278939bd1ac8005ea9225c9f3f5b6a659d55f8e62e7efa30ebc08","repoDigests":["localhost/minikube-local-cache-test@sha256:6a336fb747a6ad63b1452a7d6ce2f9577e55
bb8bc2a4a8eed6642d66006a614d"],"repoTags":["localhost/minikube-local-cache-test:functional-882363"],"size":"3330"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1
ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-882363"],"size":"4943877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882363 image ls --format json --alsologtostderr:
I0831 22:27:28.052146   32138 out.go:345] Setting OutFile to fd 1 ...
I0831 22:27:28.052392   32138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:28.052401   32138 out.go:358] Setting ErrFile to fd 2...
I0831 22:27:28.052405   32138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:28.052619   32138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
I0831 22:27:28.053191   32138 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:28.053297   32138 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:28.053668   32138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:28.053714   32138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:28.068545   32138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
I0831 22:27:28.068973   32138 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:28.069543   32138 main.go:141] libmachine: Using API Version  1
I0831 22:27:28.069568   32138 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:28.069954   32138 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:28.070136   32138 main.go:141] libmachine: (functional-882363) Calling .GetState
I0831 22:27:28.072058   32138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:28.072109   32138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:28.086558   32138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
I0831 22:27:28.086945   32138 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:28.087462   32138 main.go:141] libmachine: Using API Version  1
I0831 22:27:28.087480   32138 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:28.087835   32138 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:28.088010   32138 main.go:141] libmachine: (functional-882363) Calling .DriverName
I0831 22:27:28.088206   32138 ssh_runner.go:195] Run: systemctl --version
I0831 22:27:28.088245   32138 main.go:141] libmachine: (functional-882363) Calling .GetSSHHostname
I0831 22:27:28.091174   32138 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:28.091682   32138 main.go:141] libmachine: (functional-882363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:57:fc", ip: ""} in network mk-functional-882363: {Iface:virbr1 ExpiryTime:2024-08-31 23:24:24 +0000 UTC Type:0 Mac:52:54:00:d1:57:fc Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-882363 Clientid:01:52:54:00:d1:57:fc}
I0831 22:27:28.091703   32138 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined IP address 192.168.39.142 and MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:28.091919   32138 main.go:141] libmachine: (functional-882363) Calling .GetSSHPort
I0831 22:27:28.092090   32138 main.go:141] libmachine: (functional-882363) Calling .GetSSHKeyPath
I0831 22:27:28.092259   32138 main.go:141] libmachine: (functional-882363) Calling .GetSSHUsername
I0831 22:27:28.092407   32138 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/functional-882363/id_rsa Username:docker}
I0831 22:27:28.177627   32138 ssh_runner.go:195] Run: sudo crictl images --output json
I0831 22:27:28.233383   32138 main.go:141] libmachine: Making call to close driver server
I0831 22:27:28.233398   32138 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:28.233679   32138 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:28.233696   32138 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:28.233705   32138 main.go:141] libmachine: Making call to close driver server
I0831 22:27:28.233713   32138 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:28.233921   32138 main.go:141] libmachine: (functional-882363) DBG | Closing plugin on server side
I0831 22:27:28.233986   32138 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:28.234008   32138 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
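A note on the format: the stdout above (from "image ls --format json") is a flat JSON array of objects with id, repoDigests, repoTags and size fields, where size is a byte count encoded as a string. As a rough illustration only (the struct below is ad hoc, not a type taken from the minikube codebase), a few lines of Go are enough to decode that output and print tag/size pairs:

// listimages.go - sketch: decode the JSON emitted by
// "minikube image ls --format json", as captured above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count as a string, e.g. "31470524"
}

func main() {
	var images []imageEntry
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}

Piping the command's stdout into it, e.g. "out/minikube-linux-amd64 -p functional-882363 image ls --format json | go run listimages.go", lists every tagged image reported by CRI-O on the node.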

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882363 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 23dfc155623278939bd1ac8005ea9225c9f3f5b6a659d55f8e62e7efa30ebc08
repoDigests:
- localhost/minikube-local-cache-test@sha256:6a336fb747a6ad63b1452a7d6ce2f9577e55bb8bc2a4a8eed6642d66006a614d
repoTags:
- localhost/minikube-local-cache-test:functional-882363
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-882363
size: "4943877"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882363 image ls --format yaml --alsologtostderr:
I0831 22:27:24.047768   31679 out.go:345] Setting OutFile to fd 1 ...
I0831 22:27:24.047888   31679 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:24.047897   31679 out.go:358] Setting ErrFile to fd 2...
I0831 22:27:24.047902   31679 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:24.048054   31679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
I0831 22:27:24.048554   31679 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:24.048686   31679 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:24.049021   31679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:24.049079   31679 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:24.063767   31679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42117
I0831 22:27:24.064171   31679 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:24.064677   31679 main.go:141] libmachine: Using API Version  1
I0831 22:27:24.064697   31679 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:24.065055   31679 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:24.065295   31679 main.go:141] libmachine: (functional-882363) Calling .GetState
I0831 22:27:24.066879   31679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:24.066912   31679 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:24.080960   31679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
I0831 22:27:24.081291   31679 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:24.081701   31679 main.go:141] libmachine: Using API Version  1
I0831 22:27:24.081722   31679 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:24.082110   31679 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:24.082305   31679 main.go:141] libmachine: (functional-882363) Calling .DriverName
I0831 22:27:24.082590   31679 ssh_runner.go:195] Run: systemctl --version
I0831 22:27:24.082622   31679 main.go:141] libmachine: (functional-882363) Calling .GetSSHHostname
I0831 22:27:24.085489   31679 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:24.085867   31679 main.go:141] libmachine: (functional-882363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:57:fc", ip: ""} in network mk-functional-882363: {Iface:virbr1 ExpiryTime:2024-08-31 23:24:24 +0000 UTC Type:0 Mac:52:54:00:d1:57:fc Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-882363 Clientid:01:52:54:00:d1:57:fc}
I0831 22:27:24.085885   31679 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined IP address 192.168.39.142 and MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:24.086030   31679 main.go:141] libmachine: (functional-882363) Calling .GetSSHPort
I0831 22:27:24.086181   31679 main.go:141] libmachine: (functional-882363) Calling .GetSSHKeyPath
I0831 22:27:24.086299   31679 main.go:141] libmachine: (functional-882363) Calling .GetSSHUsername
I0831 22:27:24.086426   31679 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/functional-882363/id_rsa Username:docker}
I0831 22:27:24.178773   31679 ssh_runner.go:195] Run: sudo crictl images --output json
I0831 22:27:24.268193   31679 main.go:141] libmachine: Making call to close driver server
I0831 22:27:24.268208   31679 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:24.268532   31679 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:24.268549   31679 main.go:141] libmachine: (functional-882363) DBG | Closing plugin on server side
I0831 22:27:24.268552   31679 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:24.268593   31679 main.go:141] libmachine: Making call to close driver server
I0831 22:27:24.268600   31679 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:24.268807   31679 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:24.268824   31679 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh pgrep buildkitd: exit status 1 (226.795229ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image build -t localhost/my-image:functional-882363 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 image build -t localhost/my-image:functional-882363 testdata/build --alsologtostderr: (5.231088096s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882363 image build -t localhost/my-image:functional-882363 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9e41063aa40
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-882363
--> 3a6bcf5e15e
Successfully tagged localhost/my-image:functional-882363
3a6bcf5e15e97e55c1cc0a39d5a853007fd9c63c6a0b6c5a1b3715f9d9bbd103
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882363 image build -t localhost/my-image:functional-882363 testdata/build --alsologtostderr:
I0831 22:27:24.541660   31761 out.go:345] Setting OutFile to fd 1 ...
I0831 22:27:24.541987   31761 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:24.541999   31761 out.go:358] Setting ErrFile to fd 2...
I0831 22:27:24.542006   31761 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:27:24.542296   31761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
I0831 22:27:24.543077   31761 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:24.543669   31761 config.go:182] Loaded profile config "functional-882363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:27:24.544034   31761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:24.544074   31761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:24.558568   31761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
I0831 22:27:24.559031   31761 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:24.559630   31761 main.go:141] libmachine: Using API Version  1
I0831 22:27:24.559657   31761 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:24.559954   31761 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:24.560133   31761 main.go:141] libmachine: (functional-882363) Calling .GetState
I0831 22:27:24.562075   31761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0831 22:27:24.562127   31761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0831 22:27:24.576169   31761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
I0831 22:27:24.576594   31761 main.go:141] libmachine: () Calling .GetVersion
I0831 22:27:24.577053   31761 main.go:141] libmachine: Using API Version  1
I0831 22:27:24.577075   31761 main.go:141] libmachine: () Calling .SetConfigRaw
I0831 22:27:24.577374   31761 main.go:141] libmachine: () Calling .GetMachineName
I0831 22:27:24.577543   31761 main.go:141] libmachine: (functional-882363) Calling .DriverName
I0831 22:27:24.577735   31761 ssh_runner.go:195] Run: systemctl --version
I0831 22:27:24.577760   31761 main.go:141] libmachine: (functional-882363) Calling .GetSSHHostname
I0831 22:27:24.580320   31761 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:24.580767   31761 main.go:141] libmachine: (functional-882363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:57:fc", ip: ""} in network mk-functional-882363: {Iface:virbr1 ExpiryTime:2024-08-31 23:24:24 +0000 UTC Type:0 Mac:52:54:00:d1:57:fc Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-882363 Clientid:01:52:54:00:d1:57:fc}
I0831 22:27:24.580795   31761 main.go:141] libmachine: (functional-882363) DBG | domain functional-882363 has defined IP address 192.168.39.142 and MAC address 52:54:00:d1:57:fc in network mk-functional-882363
I0831 22:27:24.580992   31761 main.go:141] libmachine: (functional-882363) Calling .GetSSHPort
I0831 22:27:24.581189   31761 main.go:141] libmachine: (functional-882363) Calling .GetSSHKeyPath
I0831 22:27:24.581324   31761 main.go:141] libmachine: (functional-882363) Calling .GetSSHUsername
I0831 22:27:24.581458   31761 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/functional-882363/id_rsa Username:docker}
I0831 22:27:24.704303   31761 build_images.go:161] Building image from path: /tmp/build.3659599476.tar
I0831 22:27:24.704370   31761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0831 22:27:24.729904   31761 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3659599476.tar
I0831 22:27:24.744266   31761 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3659599476.tar: stat -c "%s %y" /var/lib/minikube/build/build.3659599476.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3659599476.tar': No such file or directory
I0831 22:27:24.744297   31761 ssh_runner.go:362] scp /tmp/build.3659599476.tar --> /var/lib/minikube/build/build.3659599476.tar (3072 bytes)
I0831 22:27:24.836414   31761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3659599476
I0831 22:27:24.884950   31761 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3659599476 -xf /var/lib/minikube/build/build.3659599476.tar
I0831 22:27:24.898328   31761 crio.go:315] Building image: /var/lib/minikube/build/build.3659599476
I0831 22:27:24.898413   31761 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-882363 /var/lib/minikube/build/build.3659599476 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0831 22:27:29.699486   31761 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-882363 /var/lib/minikube/build/build.3659599476 --cgroup-manager=cgroupfs: (4.801044791s)
I0831 22:27:29.699576   31761 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3659599476
I0831 22:27:29.713528   31761 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3659599476.tar
I0831 22:27:29.725044   31761 build_images.go:217] Built localhost/my-image:functional-882363 from /tmp/build.3659599476.tar
I0831 22:27:29.725075   31761 build_images.go:133] succeeded building to: functional-882363
I0831 22:27:29.725081   31761 build_images.go:134] failed building to: 
I0831 22:27:29.725107   31761 main.go:141] libmachine: Making call to close driver server
I0831 22:27:29.725117   31761 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:29.725455   31761 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:29.725480   31761 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:29.725490   31761 main.go:141] libmachine: Making call to close driver server
I0831 22:27:29.725511   31761 main.go:141] libmachine: (functional-882363) Calling .Close
I0831 22:27:29.725519   31761 main.go:141] libmachine: (functional-882363) DBG | Closing plugin on server side
I0831 22:27:29.725774   31761 main.go:141] libmachine: Successfully made call to close driver server
I0831 22:27:29.725787   31761 main.go:141] libmachine: Making call to close connection to plugin binary
I0831 22:27:29.725808   31761 main.go:141] libmachine: (functional-882363) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.65s)
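For context, the stderr above spells out how "image build" works on the crio runtime: the local build context is tarred to /tmp, copied into the node under /var/lib/minikube/build, extracted there, built with "sudo podman build ... --cgroup-manager=cgroupfs", and the staging directory is removed again. A minimal sketch (profile name, tag and context path copied from this run; this program is not part of the test suite) that drives the same top-level command from Go and streams its output:

// buildimage.go - sketch: run the same "image build" invocation as
// functional_test.go:315 above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-882363",
		"image", "build",
		"-t", "localhost/my-image:functional-882363",
		"testdata/build",
		"--alsologtostderr")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}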

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.963082681s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-882363
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-882363 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-882363 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-6b9f76b5c7-b5kq7" [4e256651-4d91-4a71-8bce-329eb8423c10] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:345: "hello-node-6b9f76b5c7-b5kq7" [4e256651-4d91-4a71-8bce-329eb8423c10] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009148854s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image load --daemon kicbase/echo-server:functional-882363 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-882363 image load --daemon kicbase/echo-server:functional-882363 --alsologtostderr: (2.7503684s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image load --daemon kicbase/echo-server:functional-882363 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-882363
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image load --daemon kicbase/echo-server:functional-882363 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image save kicbase/echo-server:functional-882363 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image rm kicbase/echo-server:functional-882363 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)
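The ImageSaveToFile and ImageLoadFromFile tests above round-trip an image through a tarball: "image save" writes it to echo-server-save.tar on the host, and "image load" pushes it back into the node's CRI-O store. A short sketch chaining the same commands (tar path copied from this run; any writable location would do):

// roundtrip.go - sketch of the save/load round trip exercised above.
package main

import (
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarPath = "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"
	if err := run("-p", "functional-882363", "image", "save",
		"kicbase/echo-server:functional-882363", tarPath, "--alsologtostderr"); err != nil {
		os.Exit(1)
	}
	if err := run("-p", "functional-882363", "image", "load", tarPath, "--alsologtostderr"); err != nil {
		os.Exit(1)
	}
	if err := run("-p", "functional-882363", "image", "ls"); err != nil {
		os.Exit(1)
	}
}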

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-882363
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 image save --daemon kicbase/echo-server:functional-882363 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-882363
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 service list -o json
functional_test.go:1494: Took "411.458048ms" to run "out/minikube-linux-amd64 -p functional-882363 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.142:30199
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.142:30199
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
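ServiceCmd/HTTPS and ServiceCmd/URL both resolve the NodePort endpoint for the hello-node deployment (https://192.168.39.142:30199 and http://192.168.39.142:30199 in this run) via "service hello-node --url". A minimal sketch, assuming the deployment from ServiceCmd/DeployApp is still running, that resolves the URL the same way and probes it with a plain HTTP GET:

// checkservice.go - sketch: resolve the hello-node URL as the test does,
// then issue one GET against it and print the status.
package main

import (
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-882363", "service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "resolve url:", err)
		os.Exit(1)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.142:30199
	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "get:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}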

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "299.316144ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.102984ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (28.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdany-port1890236127/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725143216010110632" to /tmp/TestFunctionalparallelMountCmdany-port1890236127/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725143216010110632" to /tmp/TestFunctionalparallelMountCmdany-port1890236127/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725143216010110632" to /tmp/TestFunctionalparallelMountCmdany-port1890236127/001/test-1725143216010110632
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.948305ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 31 22:26 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:26 test-1725143216010110632
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh cat /mount-9p/test-1725143216010110632
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-882363 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:345: "busybox-mount" [c2053644-00b8-4898-9d18-40bf7d18af11] Pending
helpers_test.go:345: "busybox-mount" [c2053644-00b8-4898-9d18-40bf7d18af11] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:345: "busybox-mount" [c2053644-00b8-4898-9d18-40bf7d18af11] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "busybox-mount" [c2053644-00b8-4898-9d18-40bf7d18af11] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.004031117s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-882363 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdany-port1890236127/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (28.70s)
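The any-port mount test above writes three marker files into a host temp directory, exposes it at /mount-9p with "minikube mount", and confirms the mount from inside the node with "findmnt -T /mount-9p | grep 9p" before running the busybox-mount pod against it. A small sketch of just that verification step (profile name and mount point as used in this run), to be run from the host while a mount is active:

// checkmount.go - sketch: ask the node whether /mount-9p is a 9p mount.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-882363", "ssh", "findmnt -T /mount-9p").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "findmnt failed (is the mount still active?):", err)
		os.Exit(1)
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("/mount-9p is served over 9p")
	} else {
		fmt.Print("/mount-9p is mounted, but not via 9p:\n" + string(out))
	}
}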

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "303.220694ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.068079ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdspecific-port1267255700/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.089822ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdspecific-port1267255700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh "sudo umount -f /mount-9p": exit status 1 (230.686105ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-882363 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdspecific-port1267255700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2153887758/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2153887758/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2153887758/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T" /mount1: exit status 1 (308.830649ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-882363 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2153887758/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2153887758/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2153887758/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
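
As a rough follow-up to the cleanup above (a by-hand sketch only, reusing the profile and mount points from this run), one can confirm that `mount -p functional-882363 --kill=true` removed all three mounts:

	# After the kill, findmnt should fail for each mount point because it is gone.
	for mp in /mount1 /mount2 /mount3; do
	  out/minikube-linux-amd64 -p functional-882363 ssh "findmnt -T $mp" || echo "$mp is no longer mounted"
	done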

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-882363
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-882363
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-882363
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (202.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-957517 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0831 22:27:43.737990   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:59.875500   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:30:27.580371   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-957517 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m21.842147374s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.51s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-957517 -- rollout status deployment/busybox: (5.587432535s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-cwtrb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-fkvvp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-zdnwd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-cwtrb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-fkvvp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-zdnwd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-cwtrb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-fkvvp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-zdnwd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.78s)
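
The same in-cluster DNS resolution check can be repeated by hand across every busybox pod (a sketch, reusing the context and pod-listing command from the log above):

	# Resolve the in-cluster service FQDN from inside each pod of the busybox deployment.
	for pod in $(kubectl --context ha-957517 get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context ha-957517 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done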

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-cwtrb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-cwtrb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-fkvvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-fkvvp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-zdnwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-957517 -- exec busybox-7dff88458-zdnwd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
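
The host-reachability check above resolves host.minikube.internal inside a pod and pings the resulting address (192.168.39.1 on this KVM network). A by-hand sketch for a single pod, using the pod name from this run:

	# Resolve the host gateway from inside the pod, then send a single ping to it.
	HOST_IP=$(kubectl --context ha-957517 exec busybox-7dff88458-cwtrb -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-957517 exec busybox-7dff88458-cwtrb -- sh -c "ping -c 1 $HOST_IP"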

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-957517 -v=7 --alsologtostderr
E0831 22:31:42.546928   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:42.553348   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:42.564822   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:42.586291   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:42.627691   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:42.709020   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:42.870552   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:43.192615   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:43.834145   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:45.116259   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:47.677920   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:31:52.800233   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:32:03.041588   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-957517 -v=7 --alsologtostderr: (54.586171382s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.40s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-957517 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status --output json -v=7 --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp testdata/cp-test.txt ha-957517:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517:/home/docker/cp-test.txt ha-957517-m02:/home/docker/cp-test_ha-957517_ha-957517-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test_ha-957517_ha-957517-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517:/home/docker/cp-test.txt ha-957517-m03:/home/docker/cp-test_ha-957517_ha-957517-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test_ha-957517_ha-957517-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517:/home/docker/cp-test.txt ha-957517-m04:/home/docker/cp-test_ha-957517_ha-957517-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test_ha-957517_ha-957517-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp testdata/cp-test.txt ha-957517-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m02:/home/docker/cp-test.txt ha-957517:/home/docker/cp-test_ha-957517-m02_ha-957517.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test_ha-957517-m02_ha-957517.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m02:/home/docker/cp-test.txt ha-957517-m03:/home/docker/cp-test_ha-957517-m02_ha-957517-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test_ha-957517-m02_ha-957517-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m02:/home/docker/cp-test.txt ha-957517-m04:/home/docker/cp-test_ha-957517-m02_ha-957517-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test_ha-957517-m02_ha-957517-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp testdata/cp-test.txt ha-957517-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt ha-957517:/home/docker/cp-test_ha-957517-m03_ha-957517.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test_ha-957517-m03_ha-957517.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt ha-957517-m02:/home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test_ha-957517-m03_ha-957517-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m03:/home/docker/cp-test.txt ha-957517-m04:/home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test_ha-957517-m03_ha-957517-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp testdata/cp-test.txt ha-957517-m04:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3425674467/001/cp-test_ha-957517-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt ha-957517:/home/docker/cp-test_ha-957517-m04_ha-957517.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517 "sudo cat /home/docker/cp-test_ha-957517-m04_ha-957517.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt ha-957517-m02:/home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test_ha-957517-m04_ha-957517-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 cp ha-957517-m04:/home/docker/cp-test.txt ha-957517-m03:/home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m03 "sudo cat /home/docker/cp-test_ha-957517-m04_ha-957517-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.31s)
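
CopyFile exercises `minikube cp` in every direction (host to node, node to host, node to node) and verifies each copy with `ssh ... sudo cat`. A minimal one-hop sketch of that pattern, with the node name and paths taken from this run:

	# Copy a file from the host to a secondary node, then read it back to verify.
	out/minikube-linux-amd64 -p ha-957517 cp testdata/cp-test.txt ha-957517-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-957517 ssh -n ha-957517-m02 "sudo cat /home/docker/cp-test.txt"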

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.450384844s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (225.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-957517 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0831 22:51:42.547466   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-957517 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m44.747561696s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (225.54s)
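
After the restart, the test checks every node's Ready condition via the go-template above. An equivalent by-hand readiness check (a sketch, not what the test runs) that blocks until all nodes report Ready:

	# Wait up to two minutes for every node in the restarted cluster to become Ready.
	kubectl --context ha-957517 wait --for=condition=Ready nodes --all --timeout=120s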

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-957517 --control-plane -v=7 --alsologtostderr
E0831 22:54:59.875381   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-957517 --control-plane -v=7 --alsologtostderr: (1m16.738945524s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-957517 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (53.96s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-803658 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0831 22:56:42.547534   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-803658 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.960125253s)
--- PASS: TestJSONOutput/start/Command (53.96s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-803658 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-803658 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-803658 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-803658 --output=json --user=testUser: (7.351736431s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-077051 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-077051 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.787994ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c118f64f-6f4e-4e6f-b6fb-bee41da992d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-077051] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f78a41f-2681-41eb-a9fc-440912b62a36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"202ffe61-e35e-4817-aac2-474dc3b5d1cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10514cf3-26ce-45b1-82a8-c320cb95a95f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig"}}
	{"specversion":"1.0","id":"a42b8a5e-e699-4206-ba9d-41503f3f694d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube"}}
	{"specversion":"1.0","id":"e7801831-4af4-47e2-95f3-331a918ea987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"062ebcaa-255e-402c-ac19-cf44a7d2968a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"39a808e8-331d-4879-9cbf-c5d938bf4c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-077051" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-077051
--- PASS: TestErrorJSONOutput (0.19s)
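
With --output=json, minikube emits one CloudEvent per line on stdout; the unsupported-driver case above surfaces as an io.k8s.sigs.minikube.error event with exitcode 56 and name DRV_UNSUPPORTED_OS. A sketch for pulling just the error message out of that stream (assumes jq is installed, which is not part of this report):

	# Filter the JSON event stream down to the error message.
	out/minikube-linux-amd64 start -p json-output-error-077051 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'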

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (90.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-599843 --driver=kvm2  --container-runtime=crio
E0831 22:58:02.945106   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-599843 --driver=kvm2  --container-runtime=crio: (44.996923656s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-602658 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-602658 --driver=kvm2  --container-runtime=crio: (43.402873598s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-599843
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-602658
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-602658" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-602658
helpers_test.go:176: Cleaning up "first-599843" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-599843
--- PASS: TestMinikubeProfile (90.99s)
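
TestMinikubeProfile starts two profiles, switches the active profile with `minikube profile <name>`, and inspects the result with `profile list -ojson`. A small sketch for listing the valid profile names from that JSON output (assumes jq, and assumes the top-level "valid" array with a "Name" field that minikube's profile list JSON uses):

	# Print the names of all profiles minikube considers valid.
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'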

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-585489 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-585489 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.857704761s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-585489 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-585489 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-601312 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-601312 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.059575347s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.06s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601312 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601312 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-585489 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601312 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601312 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-601312
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-601312: (1.267200417s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-601312
E0831 22:59:59.875051   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-601312: (22.571764062s)
--- PASS: TestMountStart/serial/RestartStopped (23.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601312 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601312 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328486 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0831 23:01:42.546526   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328486 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.356403466s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.76s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-328486 -- rollout status deployment/busybox: (4.443325995s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-d8fm4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-qzppw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-d8fm4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-qzppw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-d8fm4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-qzppw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.88s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-d8fm4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-d8fm4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-qzppw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328486 -- exec busybox-7dff88458-qzppw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (53.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-328486 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-328486 -v 3 --alsologtostderr: (52.69778478s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.26s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-328486 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status --output json --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp testdata/cp-test.txt multinode-328486:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1488925976/001/cp-test_multinode-328486.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486:/home/docker/cp-test.txt multinode-328486-m02:/home/docker/cp-test_multinode-328486_multinode-328486-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m02 "sudo cat /home/docker/cp-test_multinode-328486_multinode-328486-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486:/home/docker/cp-test.txt multinode-328486-m03:/home/docker/cp-test_multinode-328486_multinode-328486-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m03 "sudo cat /home/docker/cp-test_multinode-328486_multinode-328486-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp testdata/cp-test.txt multinode-328486-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1488925976/001/cp-test_multinode-328486-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt multinode-328486:/home/docker/cp-test_multinode-328486-m02_multinode-328486.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486 "sudo cat /home/docker/cp-test_multinode-328486-m02_multinode-328486.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486-m02:/home/docker/cp-test.txt multinode-328486-m03:/home/docker/cp-test_multinode-328486-m02_multinode-328486-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m03 "sudo cat /home/docker/cp-test_multinode-328486-m02_multinode-328486-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp testdata/cp-test.txt multinode-328486-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1488925976/001/cp-test_multinode-328486-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt multinode-328486:/home/docker/cp-test_multinode-328486-m03_multinode-328486.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486 "sudo cat /home/docker/cp-test_multinode-328486-m03_multinode-328486.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 cp multinode-328486-m03:/home/docker/cp-test.txt multinode-328486-m02:/home/docker/cp-test_multinode-328486-m03_multinode-328486-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 ssh -n multinode-328486-m02 "sudo cat /home/docker/cp-test_multinode-328486-m03_multinode-328486-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.01s)

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-328486 node stop m03: (1.485638637s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328486 status: exit status 7 (411.199974ms)

                                                
                                                
-- stdout --
	multinode-328486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328486-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328486-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr: exit status 7 (408.494353ms)

                                                
                                                
-- stdout --
	multinode-328486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328486-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328486-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:03:19.571528   50269 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:03:19.571760   50269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:03:19.571769   50269 out.go:358] Setting ErrFile to fd 2...
	I0831 23:03:19.571774   50269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:03:19.571932   50269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:03:19.572078   50269 out.go:352] Setting JSON to false
	I0831 23:03:19.572100   50269 mustload.go:65] Loading cluster: multinode-328486
	I0831 23:03:19.572200   50269 notify.go:220] Checking for updates...
	I0831 23:03:19.572421   50269 config.go:182] Loaded profile config "multinode-328486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:03:19.572436   50269 status.go:255] checking status of multinode-328486 ...
	I0831 23:03:19.572792   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.572847   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.591965   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0831 23:03:19.592447   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.592989   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.593014   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.593335   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.593543   50269 main.go:141] libmachine: (multinode-328486) Calling .GetState
	I0831 23:03:19.595072   50269 status.go:330] multinode-328486 host status = "Running" (err=<nil>)
	I0831 23:03:19.595086   50269 host.go:66] Checking if "multinode-328486" exists ...
	I0831 23:03:19.595452   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.595488   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.610337   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0831 23:03:19.610763   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.611280   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.611313   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.611617   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.611803   50269 main.go:141] libmachine: (multinode-328486) Calling .GetIP
	I0831 23:03:19.614448   50269 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:03:19.614790   50269 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:03:19.614817   50269 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:03:19.614952   50269 host.go:66] Checking if "multinode-328486" exists ...
	I0831 23:03:19.615234   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.615288   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.629790   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
	I0831 23:03:19.630197   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.630651   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.630664   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.631000   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.631173   50269 main.go:141] libmachine: (multinode-328486) Calling .DriverName
	I0831 23:03:19.631409   50269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:03:19.631440   50269 main.go:141] libmachine: (multinode-328486) Calling .GetSSHHostname
	I0831 23:03:19.633909   50269 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:03:19.634334   50269 main.go:141] libmachine: (multinode-328486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e3:b6", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:00:29 +0000 UTC Type:0 Mac:52:54:00:e8:e3:b6 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-328486 Clientid:01:52:54:00:e8:e3:b6}
	I0831 23:03:19.634363   50269 main.go:141] libmachine: (multinode-328486) DBG | domain multinode-328486 has defined IP address 192.168.39.107 and MAC address 52:54:00:e8:e3:b6 in network mk-multinode-328486
	I0831 23:03:19.634461   50269 main.go:141] libmachine: (multinode-328486) Calling .GetSSHPort
	I0831 23:03:19.634617   50269 main.go:141] libmachine: (multinode-328486) Calling .GetSSHKeyPath
	I0831 23:03:19.634757   50269 main.go:141] libmachine: (multinode-328486) Calling .GetSSHUsername
	I0831 23:03:19.634881   50269 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486/id_rsa Username:docker}
	I0831 23:03:19.710885   50269 ssh_runner.go:195] Run: systemctl --version
	I0831 23:03:19.716728   50269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:03:19.731495   50269 kubeconfig.go:125] found "multinode-328486" server: "https://192.168.39.107:8443"
	I0831 23:03:19.731524   50269 api_server.go:166] Checking apiserver status ...
	I0831 23:03:19.731562   50269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:03:19.745311   50269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0831 23:03:19.754747   50269 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0831 23:03:19.754783   50269 ssh_runner.go:195] Run: ls
	I0831 23:03:19.759198   50269 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0831 23:03:19.763828   50269 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0831 23:03:19.763846   50269 status.go:422] multinode-328486 apiserver status = Running (err=<nil>)
	I0831 23:03:19.763854   50269 status.go:257] multinode-328486 status: &{Name:multinode-328486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:03:19.763870   50269 status.go:255] checking status of multinode-328486-m02 ...
	I0831 23:03:19.764134   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.764167   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.778826   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33095
	I0831 23:03:19.779171   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.779639   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.779657   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.779920   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.780107   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .GetState
	I0831 23:03:19.781474   50269 status.go:330] multinode-328486-m02 host status = "Running" (err=<nil>)
	I0831 23:03:19.781489   50269 host.go:66] Checking if "multinode-328486-m02" exists ...
	I0831 23:03:19.781778   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.781815   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.796102   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0831 23:03:19.796458   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.796926   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.796950   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.797273   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.797437   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .GetIP
	I0831 23:03:19.800109   50269 main.go:141] libmachine: (multinode-328486-m02) DBG | domain multinode-328486-m02 has defined MAC address 52:54:00:1d:43:6e in network mk-multinode-328486
	I0831 23:03:19.800555   50269 main.go:141] libmachine: (multinode-328486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:6e", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:01:33 +0000 UTC Type:0 Mac:52:54:00:1d:43:6e Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-328486-m02 Clientid:01:52:54:00:1d:43:6e}
	I0831 23:03:19.800591   50269 main.go:141] libmachine: (multinode-328486-m02) DBG | domain multinode-328486-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:1d:43:6e in network mk-multinode-328486
	I0831 23:03:19.800730   50269 host.go:66] Checking if "multinode-328486-m02" exists ...
	I0831 23:03:19.801032   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.801081   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.816691   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0831 23:03:19.817019   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.817450   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.817468   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.817747   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.817913   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .DriverName
	I0831 23:03:19.818069   50269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:03:19.818085   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .GetSSHHostname
	I0831 23:03:19.820471   50269 main.go:141] libmachine: (multinode-328486-m02) DBG | domain multinode-328486-m02 has defined MAC address 52:54:00:1d:43:6e in network mk-multinode-328486
	I0831 23:03:19.820825   50269 main.go:141] libmachine: (multinode-328486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:43:6e", ip: ""} in network mk-multinode-328486: {Iface:virbr1 ExpiryTime:2024-09-01 00:01:33 +0000 UTC Type:0 Mac:52:54:00:1d:43:6e Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-328486-m02 Clientid:01:52:54:00:1d:43:6e}
	I0831 23:03:19.820846   50269 main.go:141] libmachine: (multinode-328486-m02) DBG | domain multinode-328486-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:1d:43:6e in network mk-multinode-328486
	I0831 23:03:19.821000   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .GetSSHPort
	I0831 23:03:19.821162   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .GetSSHKeyPath
	I0831 23:03:19.821330   50269 main.go:141] libmachine: (multinode-328486-m02) Calling .GetSSHUsername
	I0831 23:03:19.821468   50269 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18943-13149/.minikube/machines/multinode-328486-m02/id_rsa Username:docker}
	I0831 23:03:19.906515   50269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:03:19.920959   50269 status.go:257] multinode-328486-m02 status: &{Name:multinode-328486-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:03:19.920989   50269 status.go:255] checking status of multinode-328486-m03 ...
	I0831 23:03:19.921354   50269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0831 23:03:19.921397   50269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0831 23:03:19.936300   50269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46599
	I0831 23:03:19.936683   50269 main.go:141] libmachine: () Calling .GetVersion
	I0831 23:03:19.937079   50269 main.go:141] libmachine: Using API Version  1
	I0831 23:03:19.937100   50269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0831 23:03:19.937444   50269 main.go:141] libmachine: () Calling .GetMachineName
	I0831 23:03:19.937649   50269 main.go:141] libmachine: (multinode-328486-m03) Calling .GetState
	I0831 23:03:19.939196   50269 status.go:330] multinode-328486-m03 host status = "Stopped" (err=<nil>)
	I0831 23:03:19.939207   50269 status.go:343] host is not running, skipping remaining checks
	I0831 23:03:19.939213   50269 status.go:257] multinode-328486-m03 status: &{Name:multinode-328486-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
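A condensed sketch of the node-stop flow exercised above, reusing the binary and profile name from the run; the non-zero exit from status (exit status 7) is expected once any node reports Stopped, and the trailing echo is only illustrative:

    # stop the third node of the multi-node profile
    out/minikube-linux-amd64 -p multinode-328486 node stop m03
    # status exits 7 while a node is stopped; the per-node summary shows which one
    out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr || echo "status exited $? (7 expected here)"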

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-328486 node start m03 -v=7 --alsologtostderr: (39.300918763s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.91s)
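The corresponding restart step, as exercised above:

    # bring the stopped node back and confirm the cluster sees all nodes again
    out/minikube-linux-amd64 -p multinode-328486 node start m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-328486 status -v=7 --alsologtostderr
    kubectl get nodes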

                                                
                                    
TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-328486 node delete m03: (1.492092955s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.01s)
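A condensed sketch of the delete flow above; the test additionally checks each node's Ready condition with a go-template query, omitted here for brevity:

    # drop the third node from the profile, then confirm the remaining nodes are Ready
    out/minikube-linux-amd64 -p multinode-328486 node delete m03
    out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
    kubectl get nodes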

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328486 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0831 23:14:42.946929   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328486 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.006801341s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.52s)
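The restart is simply a fresh start against the existing profile with --wait=true, as run above:

    # restart the whole multi-node cluster and wait for every component to come up
    out/minikube-linux-amd64 start -p multinode-328486 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-328486 status --alsologtostderr
    kubectl get nodes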

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328486
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328486-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-328486-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.570546ms)

                                                
                                                
-- stdout --
	* [multinode-328486-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-328486-m02' is duplicated with machine name 'multinode-328486-m02' in profile 'multinode-328486'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328486-m03 --driver=kvm2  --container-runtime=crio
E0831 23:14:59.875359   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328486-m03 --driver=kvm2  --container-runtime=crio: (41.99332461s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-328486
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-328486: exit status 80 (196.739875ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-328486 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-328486-m03 already exists in multinode-328486-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-328486-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.09s)
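In short, the conflict checks exercised above: a new profile may not reuse a machine name that already belongs to an existing profile (MK_USAGE, exit 14), and node add refuses to add a node whose generated name already exists (GUEST_NODE_ADD, exit 80). The commands are the ones from the run; the comments are added:

    # rejected, exit 14: multinode-328486-m02 is already a machine in profile multinode-328486
    out/minikube-linux-amd64 start -p multinode-328486-m02 --driver=kvm2 --container-runtime=crio
    # a genuinely unused name is accepted as a standalone profile
    out/minikube-linux-amd64 start -p multinode-328486-m03 --driver=kvm2 --container-runtime=crio
    # rejected, exit 80: the next node name (m03) collides with the standalone profile created above
    out/minikube-linux-amd64 node add -p multinode-328486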

                                                
                                    
TestScheduledStopUnix (113.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-134090 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-134090 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.182500893s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134090 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-134090 -n scheduled-stop-134090
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134090 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134090 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134090 -n scheduled-stop-134090
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-134090
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134090 --schedule 15s
E0831 23:21:25.613988   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0831 23:21:42.547304   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/functional-882363/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-134090
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-134090: exit status 7 (64.663573ms)

                                                
                                                
-- stdout --
	scheduled-stop-134090
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134090 -n scheduled-stop-134090
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134090 -n scheduled-stop-134090: exit status 7 (65.121684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-134090" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-134090
--- PASS: TestScheduledStopUnix (113.72s)
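The scheduled-stop commands exercised above, condensed; once the 15s schedule fires, status reports Stopped and exits 7:

    # schedule a stop, then replace or cancel the schedule before it fires
    out/minikube-linux-amd64 stop -p scheduled-stop-134090 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-134090 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-134090 --schedule 15s
    # after the stop fires, the host state is Stopped (status exits 7)
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134090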

                                                
                                    
TestRunningBinaryUpgrade (230.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3918620593 start -p running-upgrade-741050 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3918620593 start -p running-upgrade-741050 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m10.102154021s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-741050 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-741050 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.168050917s)
helpers_test.go:176: Cleaning up "running-upgrade-741050" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-741050
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-741050: (1.165863065s)
--- PASS: TestRunningBinaryUpgrade (230.12s)
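The upgrade path exercised above: provision a cluster with the legacy release (a temporary copy under /tmp), then run start against the same profile with the current binary while the cluster is still running:

    # start on minikube v1.26.0, then upgrade the running cluster in place
    /tmp/minikube-v1.26.0.3918620593 start -p running-upgrade-741050 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-741050 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio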

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.893215ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-711704] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
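The invalid flag combination above, for quick reference; --no-kubernetes and --kubernetes-version are mutually exclusive, and the suggested fix is to unset any globally configured version:

    # rejected, MK_USAGE (exit 14): cannot pin a Kubernetes version when Kubernetes is disabled
    out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # clear a globally configured version if that is what triggered the conflict
    out/minikube-linux-amd64 config unset kubernetes-version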

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (101.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-711704 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-711704 --driver=kvm2  --container-runtime=crio: (1m41.731396359s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-711704 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.98s)

                                                
                                    
TestNetworkPlugins/group/false (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-009399 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-009399 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (526.230512ms)

                                                
                                                
-- stdout --
	* [false-009399] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:23:08.917759   58977 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:23:08.917865   58977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:23:08.917875   58977 out.go:358] Setting ErrFile to fd 2...
	I0831 23:23:08.917879   58977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:23:08.918092   58977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-13149/.minikube/bin
	I0831 23:23:08.918619   58977 out.go:352] Setting JSON to false
	I0831 23:23:08.919579   58977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7536,"bootTime":1725139053,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 23:23:08.919636   58977 start.go:139] virtualization: kvm guest
	I0831 23:23:08.922162   58977 out.go:177] * [false-009399] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0831 23:23:08.923793   58977 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:23:08.923794   58977 notify.go:220] Checking for updates...
	I0831 23:23:08.926967   58977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:23:08.928702   58977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-13149/kubeconfig
	I0831 23:23:08.930496   58977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-13149/.minikube
	I0831 23:23:08.932291   58977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 23:23:08.933979   58977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:23:08.935900   58977 config.go:182] Loaded profile config "NoKubernetes-711704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:23:08.936014   58977 config.go:182] Loaded profile config "offline-crio-651504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:23:08.936124   58977 config.go:182] Loaded profile config "running-upgrade-741050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0831 23:23:08.936217   58977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:23:09.389311   58977 out.go:177] * Using the kvm2 driver based on user configuration
	I0831 23:23:09.390958   58977 start.go:297] selected driver: kvm2
	I0831 23:23:09.390973   58977 start.go:901] validating driver "kvm2" against <nil>
	I0831 23:23:09.390992   58977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:23:09.393260   58977 out.go:201] 
	W0831 23:23:09.394589   58977 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0831 23:23:09.396194   58977 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-009399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-009399" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-009399

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-009399"

                                                
                                                
----------------------- debugLogs end: false-009399 [took: 5.335995868s] --------------------------------
helpers_test.go:176: Cleaning up "false-009399" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-009399
--- PASS: TestNetworkPlugins/group/false (6.00s)
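The rejected invocation above, condensed; with the crio runtime a CNI is mandatory, so --cni=false fails validation before any VM is created:

    # rejected, MK_USAGE (exit 14): the "crio" container runtime requires CNI
    out/minikube-linux-amd64 start -p false-009399 --memory=2048 --alsologtostderr --cni=false --driver=kvm2 --container-runtime=crio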

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (140.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.317666139 start -p stopped-upgrade-032800 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.317666139 start -p stopped-upgrade-032800 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m31.007690945s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.317666139 -p stopped-upgrade-032800 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.317666139 -p stopped-upgrade-032800 stop: (2.134738974s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-032800 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-032800 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.52819214s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (140.67s)
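The stopped-upgrade path exercised above: provision with the legacy release, stop the cluster, then start it again with the current binary:

    # old binary provisions and stops the cluster, new binary brings it back up
    /tmp/minikube-v1.26.0.317666139 start -p stopped-upgrade-032800 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.317666139 -p stopped-upgrade-032800 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-032800 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio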

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (62.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m0.750814769s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-711704 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-711704 status -o json: exit status 2 (230.952274ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-711704","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-711704
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-711704: (1.282377995s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (62.26s)

                                                
                                    
TestNoKubernetes/serial/Start (27.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0831 23:24:59.875380   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-711704 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.731404883s)
--- PASS: TestNoKubernetes/serial/Start (27.73s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-711704 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-711704 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.153502ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (31.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.520020594s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.474259713s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.99s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-711704
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-711704: (1.341447334s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (22.24s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-711704 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-711704 --driver=kvm2  --container-runtime=crio: (22.238519078s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.24s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-032800
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-711704 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-711704 "sudo systemctl is-active --quiet service kubelet": exit status 1 (182.853431ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestPause/serial/Start (53.68s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-945775 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-945775 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (53.677478128s)
--- PASS: TestPause/serial/Start (53.68s)

TestNetworkPlugins/group/auto/Start (64.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m4.114819851s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.11s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-fztcd" [454a485e-fd63-412f-82e6-3140f27cde0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-fztcd" [454a485e-fd63-412f-82e6-3140f27cde0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004470152s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.25s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (64.9s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.895039449s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.90s)

TestNetworkPlugins/group/calico/Start (100.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.90662687s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.91s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:345: "kindnet-sl5x7" [60a790d7-cc15-4e42-a1f0-2031bbad0926] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.011574397s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-w5zfx" [0dde36cb-0955-4220-83ec-02145cb9479c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-w5zfx" [0dde36cb-0955-4220-83ec-02145cb9479c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005346425s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

TestNetworkPlugins/group/custom-flannel/Start (78.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m18.857430952s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.86s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (84.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m24.516261169s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.52s)

TestNetworkPlugins/group/calico/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:345: "calico-node-wsntn" [95c9693f-ab8e-45cb-9489-e221b775ee97] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.02665475s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-trhsv" [d17ece45-48a6-4cd5-8bd9-1533a82c1dd1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-trhsv" [d17ece45-48a6-4cd5-8bd9-1533a82c1dd1] Running
E0831 23:31:22.949262   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/addons-132210/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00495487s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.40s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (71.21s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m11.20701212s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-km25k" [fe02bd57-fac5-4584-a3df-3b8968ae6f23] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-km25k" [fe02bd57-fac5-4584-a3df-3b8968ae6f23] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005269497s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-9m6d7" [67831fb1-be65-4ed8-85c5-5c6d62ade283] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-9m6d7" [67831fb1-be65-4ed8-85c5-5c6d62ade283] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004117732s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

TestNetworkPlugins/group/bridge/Start (87.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-009399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m27.229474194s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:345: "kube-flannel-ds-j7ddm" [f477c4a0-8d14-4853-ae0b-5e5e652e8873] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00515232s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-kzxhr" [7b5484f1-e532-4cda-9d2c-7af3920e562c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-kzxhr" [7b5484f1-e532-4cda-9d2c-7af3920e562c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003955622s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.45s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-009399 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-009399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-z7b6x" [d6bf0a28-260b-4e97-b508-fdce8b3eaf22] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:33:54.395300   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:54.402400   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:54.413978   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:54.435536   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:54.477427   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:54.559692   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:54.721363   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:55.042622   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:55.684061   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:56.965779   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-z7b6x" [d6bf0a28-260b-4e97-b508-fdce8b3eaf22] Running
E0831 23:33:59.527713   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:34:04.649021   20369 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-13149/.minikube/profiles/auto-009399/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005174345s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-009399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-009399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (37/270)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
220 TestContainerIPsMultiNetwork 0
239 TestChangeNoneUser 0
242 TestScheduledStopWindows 0
244 TestSkaffold 0
246 TestInsufficientStorage 0
250 TestMissingContainerUpgrade 0
256 TestNetworkPlugins/group/kubenet 2.94
264 TestNetworkPlugins/group/cilium 3.25
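
The skipped tests listed above are gated at runtime on the driver, OS, or container runtime (the per-test skip reasons follow below). As an illustration only, and not minikube's actual helper code, such a gate is typically written in a Go test with t.Skipf; the ContainerRuntime helper and the environment variable it reads are hypothetical.

package example

import (
	"os"
	"testing"
)

// ContainerRuntime is a hypothetical helper that reports which runtime the suite targets.
func ContainerRuntime() string {
	if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "" {
		return rt
	}
	return "docker"
}

// TestDockerFlagsExample sketches the skip pattern: when the runtime under test
// is not docker, the test records a skip reason and returns immediately.
func TestDockerFlagsExample(t *testing.T) {
	if rt := ContainerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
	// ... docker-specific assertions would go here ...
}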

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestContainerIPsMultiNetwork (0s)

=== RUN   TestContainerIPsMultiNetwork
multinetwork_test.go:43: running with runtime:crio goos:linux goarch:amd64
multinetwork_test.go:45: skipping: only docker driver supported
--- SKIP: TestContainerIPsMultiNetwork (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.94s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-009399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-009399

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-009399

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-009399

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-009399" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-009399

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-009399"

                                                
                                                
----------------------- debugLogs end: kubenet-009399 [took: 2.79200433s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-009399" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-009399
--- SKIP: TestNetworkPlugins/group/kubenet (2.94s)
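The kubenet skip above originates at net_test.go:93: kubenet is not a CNI plugin and CRI-O requires a CNI, so the kubenet-009399 profile is never started; that is why every debugLogs probe reports a missing context or profile. A rough sketch of such a guard follows, with the helper name and explicit parameters as assumptions rather than the actual net_test.go code.

	package net_sketch_test

	import (
		"strings"
		"testing"
	)

	// skipIfRuntimeNeedsCNI skips plugin-less (kubenet) runs on runtimes
	// that require a CNI. Runtime and plugin are passed explicitly in this
	// sketch; the real suite reads them from its test flags.
	func skipIfRuntimeNeedsCNI(t *testing.T, runtime, networkPlugin string) {
		t.Helper()
		if networkPlugin == "kubenet" && strings.Contains(runtime, "crio") {
			t.Skipf("Skipping the test as %s container runtimes requires CNI", runtime)
		}
	}

	func TestKubenetGroupSketch(t *testing.T) {
		skipIfRuntimeNeedsCNI(t, "crio", "kubenet")
		// connectivity checks (nslookup, dig, nc against 10.96.0.10) would follow
	}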

                                                
                                    
TestNetworkPlugins/group/cilium (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-009399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-009399" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-009399

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-009399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-009399"

                                                
                                                
----------------------- debugLogs end: cilium-009399 [took: 3.111645334s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-009399" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-009399
--- SKIP: TestNetworkPlugins/group/cilium (3.25s)

                                                
                                    